
Until AI Automates the World


We’re all on the AI bus careening rapidly toward the End of Human Work.

Until we get there, we all still have a lot of work to do.

We talk about the transformation as if the goal is autonomous AI, automatically doing all our work. But I think there is no question that the most valuable results will be achieved by the AI-assisted super-human, producing work never before possible – or imagined. The most valuable applications of AI will be human-machine collaboration, where AI augments human jobs and humans augment AI tasks.

We are in for a long period of working with and around intelligent machines.

To date, we have mostly experienced master-slave relationships with our machines. We feed in scads of data, and machines pour out the orders, invoices, payments, computations, and categorizations we rely on to keep business going. Or, machines feed tasks to workers such as warehouse pickers who have no other motivation/control over what to do next.

Today, for the most part it is humans who see the information, have the insights, make the decisions, and take the actions. Nevertheless, in a few arenas machines are producing better results than humans. These are limited cases where the algorithms are tested enough, the data is clean enough, the systems are integrated enough, the problem is clearly bounded, and the context is sufficiently indicated. Sadly, these conditions are rare: the majority of today’s business systems suffer greatly from poorly integrated, context-poor, messy data deployed in support of poorly articulated strategies and goals. Improving those conditions is a gargantuan effort, one already underway for decades.

For decades to come, then, AI will augment most of our jobs, and automate very few. We are not even on the cusp of understanding how to do that. We need to learn how to effectively collaborate. We need new design patterns, methods, and metaphors for this new shared work.

Technology may be ready for an AI-automated world in my lifetime, but corporations, systems, and people will be struggling to catch up.

A few questions we urgently need to answer:

  • How does an organization learn to assess opportunities to apply AI, experiment, and measure the resulting impact?
  • How do organizations acquire the skills needed to be effective users of AI?
  • What are the design methods for sharing work with a semi-autonomous agent?
  • What are the design patterns for collaborating with a machine?
  • How do we design interfaces that encourage trust? Given that ML won’t make perfect decisions in every case, how do we make people comfortable enough to use systems?
  • How do we design interfaces that involve people at the right time, in the right way?
  • How do experience designers develop sufficiently deep understanding of ML to know which behavior and context information is essential to improving ML?
  • How do we design interfaces that evolve as machines learn over time, and yet feel consistent and reliable?

Practically Personal

How personal should you make the customer experiences you deliver? How personal can you make customer experiences?

In a previous post we described how a guy who doesn’t ski reacts to images on an ecommerce site of a woman skiing.  He believes he’d respond more if shown something he can relate to.

If we somehow knew (and that’s an issue for a future post) that this guy bicycles, we could show images of bicyclists. The question for today is, what images do we need in our library to satisfy our visitors and our customer experience goals? Do we have to address, say, 6 possible biking interests (mountain, commuting, BMX, racing, family recreation, camping); 2 genders; at least 3 age groups (child, young adult, senior); and perhaps 6 environments (urban, rural, forested, plains, mountains, coastal)? That’s 17 attribute values, and 216 images to satisfy all combinations. If all you sell is bikes, perhaps you can afford that. If you cover all sports, or if sports is but one of your categories, how can you possibly?

Most likely, you don’t need 216 or even 17 images to be effective with this guy who bicycles. Maybe you only need 3 images. Which ones? How many? Who knows.
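The arithmetic above is easy to make concrete. A few lines of Python show where the 216 comes from (the attribute values are the ones assumed in the example, not real catalog data):

```python
from itertools import product

# Attribute dimensions from the example above.
interests = ["mountain", "commuting", "BMX", "racing", "family recreation", "camping"]
genders = ["male", "female"]
ages = ["child", "young adult", "senior"]
environments = ["urban", "rural", "forested", "plains", "mountains", "coastal"]

# 17 attribute values across 4 dimensions...
attributes = len(interests) + len(genders) + len(ages) + len(environments)
# ...but each combination needs its own image to cover every visitor.
combinations = len(list(product(interests, genders, ages, environments)))

print(attributes, combinations)  # 17 216
```

Add one more dimension with five values and the library balloons to 1,080 images: the cost grows multiplicatively, not additively.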

The only way to know is to “test” out the impact of having a few variations. I wish I could believe that there is one answer to the question of what will make our bicyclist happy. I fear that it depends on his current context, and therefore the customer experience must be variable as well.

In this realm, “test” bears no resemblance to A/B testing. Rather, it describes an automated, data-driven prediction of what will have the greatest impact at this moment in time. Machines can make the predictions and deliver the customer experience. The marketing team has to decide how much to invest in content variations, and which variations are most likely to be important to visitors. Automated customer experience delivery and content planning are two programs that most companies have yet to perfect, and many have yet to attempt.
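One minimal way to sketch this kind of continuous, automated “test” is a bandit-style selector: it predicts which variation will have the greatest impact right now, while still occasionally exploring alternatives. Everything here (the variant names, the epsilon-greedy policy) is illustrative, not any specific vendor’s mechanism:

```python
import random

class VariantSelector:
    """Epsilon-greedy selection of the content variant with the best predicted impact."""

    def __init__(self, variants, epsilon=0.1):
        self.epsilon = epsilon
        self.stats = {v: {"shows": 0, "responses": 0} for v in variants}

    def predicted_impact(self, variant):
        # Predicted response rate based on what we've observed so far.
        s = self.stats[variant]
        return s["responses"] / s["shows"] if s["shows"] else 0.0

    def choose(self):
        # Mostly exploit the best-predicted variant, sometimes explore.
        if random.random() < self.epsilon:
            return random.choice(list(self.stats))
        return max(self.stats, key=self.predicted_impact)

    def record(self, variant, responded):
        self.stats[variant]["shows"] += 1
        self.stats[variant]["responses"] += int(responded)

selector = VariantSelector(["mountain_biking", "commuting", "bmx"])
selector.record("commuting", True)
print(selector.choose())
```

Unlike an A/B test, there is no fixed experiment window; prediction and delivery run continuously as responses arrive.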

The Right Questions for Personalization Success


“I’m a guy, and I don’t ski. Why are you showing me pictures of a woman skiing?”

I wish I could remember the name of the man who said this, because it is a great summary of the customer perspective of personalization. The implication is, he’d be more responsive to offers that featured guys doing his sport – whatever that might be.

His complaint surfaces what I call the 5 Introductory Personalization Questions:

  1. How can we know enough about our visitor?
  2. How do we use that knowledge to select the best experience for this moment?
  3. How do we have the right content on hand?
  4. What is the mechanism for retrieving and delivering the best content to this customer at this moment?
  5. How do we know we delivered the best experience possible?

These questions are signposts for your personalization journey, and during the journey you will ask and answer many more.
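Read as a loop, the five questions roughly map onto a delivery pipeline. Here is a hypothetical skeleton; every function name and data shape is invented for illustration, not a real API:

```python
# A hypothetical skeleton mapping the five questions onto a delivery loop.

def get_visitor_profile(visitor_id):          # Q1: what do we know about this visitor?
    return {"sport": "bicycling", "segment": "commuter"}

def select_experience(profile):               # Q2: best experience for this moment?
    return {"hero_image": f"{profile['sport']}_hero"}

def fetch_content(experience, library):       # Q3/Q4: right content on hand, retrieved and delivered
    return library.get(experience["hero_image"], library["default"])

def log_outcome(visitor_id, content, responded):  # Q5: did we deliver the best experience?
    return {"visitor": visitor_id, "content": content, "responded": responded}

library = {"bicycling_hero": "bike.jpg", "default": "generic.jpg"}
profile = get_visitor_profile("v42")
content = fetch_content(select_experience(profile), library)
print(content)  # bike.jpg
```

The hard parts are rarely these functions themselves; they are the culture and process questions behind each step.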

You have almost certainly talked to people who want to answer these questions with technology. Technology is unquestionably necessary, but in my experience the culture and process concerns are far more challenging. Every organization that is struggling to deliver personalized customer experience describes issues with strategy, commitment, alignment, and workflows. Any time you fool with customer experience, the ripples reach every part of the company. Somehow, that is a lesson that never gets old but must be learned and learned again.

People don’t anticipate the breadth of what they are taking on when they begin their personalization journey. As a result, they start in the middle without the provisions, collaboration, or roadmap they need. With a little more knowledge of what the journey entails, progress is more certain and less expensive. Here’s my [You Won’t Be] Lonely Planet Guide to help you anticipate and overcome the barriers.

The guide poses questions along three dimensions for each program or task: Culture, Process, and Tooling.

Acquiring Customer Knowledge

  • Culture: What knowledge do we think is valuable? What are we willing to collect? What sources are acceptable? How much resource are we willing to devote to the process?
  • Process: Who owns the information? Who owns the policies and processes? Who will collect and analyze information to create knowledge? Who will establish, and who will manage, third-party relationships for data collection? Who is responsible for budget and planning? Who is responsible for distributing and protecting the information and knowledge?
  • Tooling: How is customer information captured, ingested, and stored? How is knowledge extracted from information? In what manner is the knowledge stored?

Applying Customer Knowledge to Creating Customer Experience

  • Culture: What is the strategy for customer experience? How does customer experience strategy align with business strategy? Who owns the customer experience strategy? What aspects of customer experience should be influenced by customer knowledge, and who decides? What degree of automation and what degree of explicit control is acceptable? Who (or what) can use the information, for what purposes, in what circumstances?
  • Process: How do various customer segmentation tactics apply to knowledge-driven customer experience? Who designs and manages the variable customer experience?
  • Tooling: What is the mechanism for knowing the customer? How is the best experience for each customer identified? How are the elements of the experience delivered during the experience?

Provisioning Customer Experience Content

  • Culture: How many variations of content are we willing to fund and manage? What sources are acceptable? What degree of quality and consistency is required? How do we reconcile variations with our brand?
  • Process: Who creates and tracks the content strategy and plans? Who decides which variations will be shown in each customer experience? What is the time frame in which the decision is made?
  • Tooling: How is the content tagged, stored, improved, and replaced? How is content tagged and formatted for use in various experiences, across various devices? How is content use and impact tracked and reported?

Evaluating Results

  • Culture: Who is responsible for the quality of customer experience? What is the goal for quality of customer experience?
  • Process: How is quality of customer experience measured and reported? How is the value of the experience to the customer measured? How is the quality of content measured? How do we measure the impact of customer experience improvements on business results by period? How do we measure the impact on value delivered to customers?
  • Tooling: What are the mechanisms for measuring, evaluating, and communicating the quality of customer experience, and the value to our business and to customers?

Optimizing Results

  • Culture: Who is responsible for improving customer experience? What are the goals for improvement? Are the goals differentiated by customer segments, product categories, or sales region?
  • Process: How is improvement measured and tracked? How do we track progress toward goals? How do we identify and prioritize efforts to improve customer experience?
  • Tooling: What are the mechanisms for predicting, delivering, measuring, and evaluating what makes the best experience for each customer at each moment?

Use Cases in Cloud Infrastructure Management

Well, with a snappy title like that,  I expect I am now all alone in this room. 😉

My cloud research this year will be focused on use cases in two areas: consumption and service. I will delve into the tasks involved, and how commercially available tooling addresses those tasks.

Consumption and Cost Management

  • Application cost performance review
  • Cost and capacity planning

Service Management

  • Service level evaluation and planning
  • Availability and fault recovery planning
  • Operations Automation

Evaluation Areas:

  • Desired outcome of the activity
  • Business and infrastructure impact of the activity
  • Roles involved
  • Tasks involved
  • What problems/aspects of the activity are addressed by a vendor solution
  • How a given solution contributes to the activity: what it does, how it works
  • Overview of tech/architecture/interfaces/data
  • Vendor’s target market, pricing, strategy

Remodeling Infrastructure Management

We are in the throes of rewriting what IT infrastructure is. The shift to the cloud changes what we pay for, how we budget and plan costs, what is costly, what can be managed, what can be predicted, how quickly systems are deployed, how easily systems are moved or replicated or recovered.

This means that we will soon be in the throes of rewriting what infrastructure management does, and how it works, and who uses it. 

We do have some inkling what to expect. The last shift in the IT infrastructure paradigm —from mainframe in data centers to distributed computing dominated by client/server—happened only a quarter century ago, and the lessons are readily available. Client/server engendered entirely new development technologies, development methodologies, operations technology —and upended how IT was controlled, budgeted, and managed.

Tools that were terrific in the mainframe environment were still useful in small ways, some of the time, for parts of a few of the problems. In other words, woefully inadequate. The replacements came from new players —think Microsoft and BMC— while established players —like IBM— were slow to catch up. The established players thought they could bolt some distributed management onto their data center management. As it turns out, the new players eventually bolted a comparatively small bit of data center management onto their vast new tooling.

With cloud, we once again face a different paradigm, a different world, that demands different tools, techniques, and opportunities. Fortunately, we can apply much more sophisticated technology today than was available 25 years ago. Machine and deep learning will save our bacon this time around.

The scale and complexity of the cloud environment will dwarf anything most of us have experienced or can imagine. Humans did OK with millions of events and objects to manage, using scripts and templates. When faced with billions and then trillions, tooling made it possible to handle bundles of objects and respond only to exceptional events. But we are on the frontier of zetta and yotta scale. We will be forced to automate almost all of infrastructure management. Machines will observe, analyze, optimize, and act. It will be our human job to observe, analyze, optimize, and act on the machines and the models they run.

A new wave of management tooling is already emerging to replace the soon-to-be-sidelined management platforms you currently rely on. A new wave of skills should be under development: you should now be spending your time building models instead of scripts.
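The shift from scripts to models can be made concrete with a toy contrast: a hand-set alert threshold versus a baseline fit from observed data. This is a deliberately simple stand-in for real ML tooling, with invented numbers:

```python
import statistics

def script_alert(value, threshold=90.0):
    # The old way: a hand-maintained threshold baked into a script.
    return value > threshold

class LearnedBaseline:
    # The new way: the baseline is fit from observed data and can be
    # re-fit as the environment changes. (A stand-in for a real model.)
    def __init__(self, history, sigmas=3.0):
        self.mean = statistics.fmean(history)
        self.stdev = statistics.stdev(history)
        self.sigmas = sigmas

    def alert(self, value):
        # Flag values far outside the learned normal range.
        return abs(value - self.mean) > self.sigmas * self.stdev

history = [50, 52, 48, 51, 49, 50, 53, 47]   # observed metric values
model = LearnedBaseline(history)
print(script_alert(70))   # False: below the static threshold
print(model.alert(70))    # True: far outside the learned baseline
```

The script’s threshold must be re-edited by a human whenever the environment changes; the model’s baseline is re-fit from data, which is what makes automation at zetta scale plausible.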

The Future of Machine Learning


Row of Trees 3 by Charles Plaisted

Interview with Tom Mitchell

12/15/16

Having read Tom Mitchell’s great article “Machine learning: Trends, perspectives and prospects” published in Science in July 2015, I wanted an update. He graciously submitted to an interview.

Tom M. Mitchell is a computer scientist and E. Fredkin University Professor at Carnegie Mellon University (CMU), where he recently stepped down as the Chair of the Machine Learning Department. Mitchell, the author of the textbook Machine Learning, is known for his contributions to the advancement of machine learning, artificial intelligence, and cognitive neuroscience.

Tom foresees these developments going forward:

  • Simultaneous and synergistic training of multiple functions
  • Never-ending learning
  • Conversational agents that learn by (user) instruction
  • Collaborative learners
  • Developing understanding of uses of deep learning
  • Continued expansion of computationally intensive and huge data learning
  • Continued acceleration of ML application in industry, science, commerce, finance

Tom, ML seems to be in an explosive growth phase this year. What do you see as the trends going forward?

ML is doing great, but it is a little narrow minded. There are lots of commercial applications and successes. But 99% of what ML is applied to right now is learning a single function. You give it some inputs, you get an output prediction. For example, you feed in medical records, you get a diagnosis. You’re giving it training pairs of some function, and asking it to learn that function. It’s good to be able to predict, but prediction is not the only thing ML can do.

I think a key trend will be training many simultaneous functions. The idea is to get synergy between functions that are learning: a model learns A, which makes it better at learning B, which makes it better at learning A. We’ll  start looking beyond a single task in our application of ML, to multitask learning that will simultaneously train a system.
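As a toy illustration of that synergy (my sketch, not Tom’s work): two linear prediction heads share one representation, so each task’s error signal also reshapes the features the other task relies on. The data and model here are entirely synthetic:

```python
import numpy as np

# Toy multitask learning: tasks A and B share the representation W_shared,
# so training on either task updates features that both tasks use.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
H_true = X @ rng.normal(size=(5, 3))
y_a = H_true @ rng.normal(size=3)  # task A target
y_b = H_true @ rng.normal(size=3)  # task B target

W_shared = rng.normal(size=(5, 3)) * 0.1  # shared representation (learned)
w_a, w_b = np.zeros(3), np.zeros(3)       # per-task heads
lr = 0.005

def total_loss():
    H = X @ W_shared
    return np.mean((H @ w_a - y_a) ** 2) + np.mean((H @ w_b - y_b) ** 2)

initial = total_loss()
for _ in range(800):
    H = X @ W_shared
    err_a, err_b = H @ w_a - y_a, H @ w_b - y_b
    # Both heads' errors flow back into the shared weights.
    W_shared -= lr * X.T @ (np.outer(err_a, w_a) + np.outer(err_b, w_b)) / len(X)
    w_a -= lr * H.T @ err_a / len(X)
    w_b -= lr * H.T @ err_b / len(X)
final = total_loss()
print(final < initial)  # joint training reduced the combined loss
```

A real system would of course use a deep-learning library and nonlinear layers; the point is only that one set of shared weights receives gradients from both tasks.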

A second related trend is never-ending learning, where a function learns to be a better learner. Currently, for most functions, the assumption is that training is turned off at some point. Or that continued feedback improves only that single function, for example, daily retraining of a single function such as spam filter, but the model doesn’t really change.

Here’s an illustration of never-ending learning: Our never-ending language learner has been running since 2010, developing along a staged curriculum that enhances itself over time. Every day it reads more text from the web, and adds more facts to its database. It now has 100 million of those facts. Every day it learns to read better than the day before. In its earliest days, it was learning to classify noun phrases, and identify simple facts. Next, it began to learn relationships between facts to create beliefs. It now can data mine its database of facts to identify these relationships; for example, it understands that if Tom is on a soccer team, Tom plays soccer. It essentially becomes a self-trainer for additional learners. It now discovers new relationships that we never told it about, expressed in the text it is reading. For example, it has discovered the relationships “clothing worn with clothing” (hat and gloves), “river flows through city” (Thames and London), “drug treats disease” (statins and high blood pressure). And then it looks for more examples of these relationships.

The challenge is, how do you organize or architect an agent so that the more it learns about, say, reading, the better it is at, say, inference. And the better it is at inference, the better it gets at reading. I think in the future we are going to see many more scenarios where ML is used in this never-ending learning construct. It seems obvious that self-driving cars need this paradigm. Or light bulbs equipped with what’s essentially cell phone functionality, that could learn about the room they are in: if someone has been lying on the floor for 10 minutes, is that typical or anomalous?

I also expect to see the development of conversational agents that learn by instruction. Now that computers can do speech recognition, ML can take us beyond the current state of human-computer interaction. ML conversational agents will be taught by user speech, for example, “Whenever it snows at night, wake me up 30 minutes early.” The agent might then ask, “How do I know it’s snowing?” and the user could instruct it to open the weather app and look at current conditions. In this way, every user effectively becomes a programmer, without having to learn a programming language.
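A toy sketch of what “taught by instruction” could look like in code (the speech parsing is skipped; the rule arrives already structured, and all names here are invented):

```python
# A toy "learning by instruction" agent: a spoken rule becomes a stored
# condition/action pair the agent can execute and later refine.

class InstructableAgent:
    def __init__(self):
        self.rules = []  # (condition_fn, action) pairs taught by the user

    def teach(self, condition_fn, action):
        self.rules.append((condition_fn, action))

    def run(self, context):
        # Fire every taught rule whose condition holds in this context.
        return [action for cond, action in self.rules if cond(context)]

agent = InstructableAgent()
# "Whenever it snows at night, wake me up 30 minutes early."
agent.teach(
    lambda ctx: ctx["weather"] == "snow" and ctx["time"] == "night",
    "wake_up_30_min_early",
)
print(agent.run({"weather": "snow", "time": "night"}))  # ['wake_up_30_min_early']
```

A real agent would also have to turn the spoken sentence into the condition and action, which is exactly where the conversational back-and-forth (“How do I know it’s snowing?”) comes in.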

I expect to see ML open up to take on learning that is more like what humans do. Never-ending learning is still pre-commercial, but even so, our language learner is communicating with another never-ending learner, Abhinav Gupta’s image learner. Collaboration among such learners could lead to a distributed world-wide knowledge base, like the web, but understandable to computers as well as people.

The widely discussed current trend toward computationally intensive and huge data learning just keeps pushing the boundaries of what’s possible. Efforts toward better computing enable our progress, for example, new processing units such as TPU (Google’s Tensor Processing Unit) that make massive data calculations much faster.

TPU was developed in support of deep learning. Deep learning itself is the most important development in ML in the past 10 years.  It has led to dramatic improvements in learning capabilities, especially for perceptual problems like vision and speech, where it has revolutionized those fields.  Many feel deep learning is currently overhyped.  Maybe it is, but it nevertheless is the most exciting development in machine learning, and I think it will continue to progress and surprise us for many more years.

Here’s a research trend that I see accelerating: ML as an assistant for scientists. For example, in genome projects, ML finds patterns at a rate and scale humans can’t achieve. For the past decade, neuroscientists have been using ML to analyze imaging data, to decode neurosignals. I think there is a big opportunity for algorithms that could learn from the many data sets that are out there. An understanding of the brain can’t be learned from a single experiment. There are thousands of published experiments, but so far, no one has found a way to jointly analyze them. ML could tackle that problem. Also, in the world of ML-provided text understanding and information extraction, a science assistant could read journals for you, then extract relevant information from both the text and the experimental data associated with the article, and help you understand its relevance to your hypothesis and data. I’m describing a template that could be used in many other applications.

Finally, we are beginning the second decade of an explosion of ML, with accelerating progress and expansion of use. Decade 1 will be as nothing compared to the decade to come. There is a huge increase in the number of people and institutions working in ML. The resources devoted to ML by finance and industry are huge, dwarfing historic academic and federal funding. We are really just at the beginning of seeing the impact ML will have on our world.

Tom, what didn’t happen, that you had expected?

I kept thinking this would happen and it hasn’t: explanation-based learning, which is like human learning. For example, you have a deep network and you want it to learn to play chess. One way to learn is to run a million games and see which ones you win. This is how the Go champion was defeated. This is very un-human learning. We humans like to find explanations for why things go wrong. “I lost my queen because I had to move my king to safety. That’s the last time I put both king and queen that close to a knight.” The explanation only mentions three pieces, not all the pieces. I, a human, can generalize from just one example, if I generate an explanation to determine what went wrong and why, instead of a zillion examples and statistics. Not every chess piece in every position is equally important (which is the initial statistical approach). Explanation-based learning can create a less data-intensive approach. But, I’ve been waiting 20 years for what I think should be this big trend. Meanwhile, simpler algorithms applied to bigger data sets with faster computers weaken the motivation to pursue this.

Evolution to Personalization: 3 Maturity Levels

How will you mature your digital marketing?

If you expect to excel in your market via superior customer experience, targeting, or personalization, you need a culture of optimization – measuring, improving, and predicting.

The strategy of using audience data to improve customer experience and optimize results has become this decade’s gold rush — for marketers and solution providers. The underpinning of that strategy will surely involve the technologies that come under the umbrella of optimization.

When you’ve reached that step, you need to take the next quantum leap: a culture of personalization, embracing content strategy and curation, and automating the delivery of the right audience experience using prediction.

I attempt to explain the maturation via this prezi: (if you are new to prezi, you click “start prezi” and then use the arrows to move through it; close your eyes if you get motion sickness…)

