Google, Microsoft, Amazon, and Salesforce have all been heavily investing in AI. Recently, each has launched new enterprise-scale AI tools and APIs that frankly all seem the same. Is there really a difference between their cognitive services?

We recently went to the MITCNC Tech Conference in Redwood City, CA to find out. All four enterprise tech giants sent executives to share their individual approaches and vision for the future of AI. 


Google Cloud Immortalizes Big Data

Mitali Dhar, Director of Global Product Partners at Google, says that humanity is still “in the early stage of the AI technology cycle,” with Google slightly ahead of everyone else. 

As Diane Greene, SVP of Google Cloud, tells it, one AI story begins in the sixties, when “people had this vision where you could instruct computers to teach themselves through observation.” Back then, people were combining computing with data, minus the cloud.  

Data + Cloud Computing = Optimal Machine Learning
Today, machine learning trains on big data using specialized cloud processors, which Greene believes will change how we live for the better. 

For example, Google’s recently announced TPU hardware specializes in neural-net computation to maximize the performance of platforms such as Google Search and Google Calendar. Google developers are also leveraging APIs to improve tasks like speech and image recognition, natural language processing, and translation. They’ve even announced a Video Intelligence API, built using TensorFlow and applied to platforms such as YouTube.

Sharing the Power of Deep Learning
TensorFlow, Google’s own open-source software library for machine learning, quickly surpassed its competitors to become the most popular tool for deep learning. A recent story of a Japanese developer helping his family’s farm automate cucumber classification shows that TensorFlow is accessible to everyday programmers, and very useful for businesses.
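The cucumber story boils down to supervised classification: label some examples, fit a model, predict. A full TensorFlow image model is beyond a short snippet, but the workflow can be sketched with a nearest-centroid classifier in plain Python; the two features and all data below are invented purely for illustration, not taken from the actual farm system.

```python
# A toy stand-in for the cucumber-sorting idea: a nearest-centroid
# classifier on two invented features (length in cm, straightness
# score). The real farm system trained a TensorFlow deep net on camera
# images; this sketch only illustrates the supervised-classification
# workflow (label examples, fit, predict).
from math import dist  # Python 3.8+

def fit_centroids(samples, labels):
    """Return {label: centroid}, where each centroid is the per-class feature mean."""
    centroids = {}
    for label in set(labels):
        points = [x for x, y in zip(samples, labels) if y == label]
        centroids[label] = tuple(sum(c) / len(points) for c in zip(*points))
    return centroids

def classify(centroids, x):
    """Assign x to the class whose centroid is nearest."""
    return min(centroids, key=lambda label: dist(centroids[label], x))

# Invented training data: long, straight cucumbers are "grade A".
samples = [(30, 0.9), (28, 0.8), (29, 0.95), (15, 0.3), (12, 0.5), (14, 0.2)]
labels = ["A", "A", "A", "B", "B", "B"]
centroids = fit_centroids(samples, labels)
print(classify(centroids, (31, 0.85)))  # a long, straight cucumber -> A
```

Swapping the centroid rule for a trained neural network changes the model, not the workflow, which is exactly what made the project tractable for one developer.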

But Google is also investing in the next generation of AI enthusiasts in other ways. In a recent move, Google Cloud acquired Kaggle, “the world’s largest community of data scientists and machine learning professionals.” The acquisition lets Google connect with a wider pool of scientists, exchange large datasets and models, and bank on rising talent.

Only the Cloud Survives
Google’s signature take on machine learning is to train each model in the cloud and the cloud alone. As Greene explains, “the data just is there, forever. We humans have a pretty imperfect way of transferring our knowledge, and when we die we lose a lot of it, but there it is in the models – getting smarter and smarter all the time.”  

Google recently emphasized at Google Cloud Next that it will continue focusing on cloud technology. Younger internet companies such as Spotify are jumping on board, and Google also hopes to win over more traditional enterprise customers. Stanford professor Surya Ganguli put it well when he declared that “AI is electricity, data is gold.” Google’s AI is at its most promising when it’s running algorithms in a cloud full of big data.


Amazon AI Emphasizes Developer Service

“Amazon Web Services (AWS) is the center of gravity for Amazon AI,” explains Swami Sivasubramanian, VP of AWS Machine Learning. Customers include household names such as Zillow, Netflix, and Pinterest, which augment their own expertise using AWS. 

Netflix, for example, has used Amazon AI to build a recommendation engine based on deep learning. When Netflix rapidly launched a global expansion, “all they had to do was basically rerun the models across a new set of data, and then activate them on day one.”

Focusing On Key Developer Categories 
For many Fortune 500 customers, then, AWS means useful cloud technology. But what does AI mean for Amazon? Sivasubramanian answers by pointing to the thousands of engineers who work on AI technologies to improve services like shipping, product recommendations, and intelligent search for millions of users.

Thanks to its large pool of users, Amazon is driven by the need to democratize AI. Amazon AI, the company’s latest service package, focuses on serving the needs of three target audiences: app developers, data scientists, and ML scientists.

From Chatbots to Object Recognition
First, app developers get AI Services. These include Lex, Polly, and Rekognition, which together cover speech recognition, natural language understanding, lifelike speech synthesis, and image recognition and analysis. Then, data scientists get AI Platforms, where visualization tools and machine learning are key. Finally, ML scientists can get their hands dirty with AI Engines, which combine several well-known deep learning frameworks.
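The three-tier lineup can be captured as a simple lookup. The service names below come from the talk; the data structure and helper are purely an illustrative sketch, not an AWS API.

```python
# The three layers of the Amazon AI stack as described in the talk,
# expressed as a plain dictionary. The offering and example names are
# from the article; the lookup helper itself is invented.
AMAZON_AI_TIERS = {
    "app developers": {
        "offering": "AI Services",
        "examples": ["Lex", "Polly", "Rekognition"],
    },
    "data scientists": {
        "offering": "AI Platforms",
        "examples": ["visualization tools", "machine learning platform"],
    },
    "ML scientists": {
        "offering": "AI Engines",
        "examples": ["well-known deep learning frameworks"],
    },
}

def offering_for(audience):
    """Which layer of the Amazon AI stack targets this audience?"""
    return AMAZON_AI_TIERS[audience]["offering"]

print(offering_for("app developers"))  # -> AI Services
```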

Amazon AI already boasts customers such as Coursera and the Washington Post, though the company also wants to reach smaller organizations and individual learners. As Sivasubramanian shares, “the goal of Amazon AI is kind of like what AWS did with the IT industry, which is to make it easy for people to consume these technologies usefully. In the same way, we want to make all these AI technologies accessible for various layers.”

AI Learns From Consumer Experiences
Despite Amazon’s success in leveraging innovation, “we’re barely scratching the surface,” says Jim Roskind, VP of Engineering. With more innovation come new problems. Amazon is experiencing first-hand how everyday users cope with all the novel AI on the menu.

During a product search, for example, “one problem is sometimes people second-guess when you give an answer.” People still don’t always trust the recommendations Amazon serves up. “Amazon would sometimes show users Y even if they asked for X, because we have learned that people end up choosing Y anyways,” reveals Roskind.
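The substitution Roskind describes can be sketched as a simple rule over purchase logs: if people who search for X overwhelmingly end up buying Y, surface Y instead. The log data and the 60% threshold below are invented for illustration; Amazon’s actual ranking system is, of course, far more sophisticated.

```python
# A hedged sketch of "show Y when people ask for X": pick the product
# users most often buy after a given query, but only substitute it when
# it dominates past outcomes. Data and threshold are invented.
from collections import Counter

def top_result(query, purchase_log, threshold=0.6):
    """Return the product users most often buy after this query if it
    dominates past outcomes; otherwise just return the query itself."""
    outcomes = Counter(bought for q, bought in purchase_log if q == query)
    if not outcomes:
        return query  # no history for this query: nothing to learn from
    product, count = outcomes.most_common(1)[0]
    return product if count / sum(outcomes.values()) >= threshold else query

# Invented log of (search query, product actually purchased) pairs.
log = [("X", "Y"), ("X", "Y"), ("X", "Y"), ("X", "X"), ("Z", "Z")]
print(top_result("X", log))  # -> Y  (3 of 4 searchers for X chose Y)
```

The second-guessing problem shows up exactly here: the rule is statistically justified, yet a user who typed X may still distrust being shown Y.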


Microsoft Is Making AI More Human

Eric Horvitz, Technical Fellow and Managing Director of Microsoft Research, is most interested in the influences of AI on people and society.

“AI advances have been diffusing through society for decades,” says Horvitz, with AI hitting new highs in computational memory, digital data, and reasoning processes. But despite these advancements, Horvitz assures us that deep learning is still “hungry for more data.”

Turning APIs Into Ecosystems
To share deep learning with the world, Microsoft is developing “a suite of cognitive service APIs that come along with a cloud service.” Microsoft Azure’s Cognitive Services suite already covers hearing and sight, and will soon showcase hand-gesture recognition. Microsoft also offers a Bot Framework that remains widely used despite the PR debacle of its Tay chatbot.  

By combining areas such as speech, natural language, planning algorithms, and vision in AI applications, Microsoft is keen on building comprehensive AI ecosystems. The goal? “To assist, to empower, and to protect” for the benefit of humanity, says Horvitz.

AI With a Human Face
When it comes to assistance, Horvitz has benefited for 16 years from a virtual assistant at Microsoft that has become an expert in his whereabouts. To bring virtual assistance to the public, Microsoft has put Cortana on multiple platforms, including Windows 10, Skype, and recently Android. Cortana is a resourceful assistant that can set alarms, recognize voices in different languages, and answer queries, competing with Siri and Google Assistant.

For empowerment, Microsoft launched AirSim on GitHub. AirSim was built “to fly drones with drone hardware that injects the drone command in a rich simulation environment,” complete with matching sensors and various learning algorithms to enable thousands of trials.

For protection, Microsoft wants to do social good by building models that allow “AI systems to reason at the frontier of human knowledge, rather than replicate it.” It’s now looking at how to prevent human errors in hospitals, as medical error is the third most common cause of hospital deaths.

How Do We Keep AI Ethical?
When developing AI, “not everything can be really thought of in advance,” Horvitz stresses. “To get the most out of AI, we have to address the rough edges and downsides as well.”

Concerns about AI include the trustworthiness of robots in an open world, ethical challenges in split-second life-or-death situations, and new attack surfaces in cyberspace. There’s also the issue of machine bias, where societal biases are amplified in datasets. Another issue making headlines is whether the intellectual power of AI will augment or replace human work.

On the other hand, people from all levels of society are coming together to ensure that AI is robust, reliable, and ethical. “The idea is to lean in,” explains Horvitz. For example, in 2009 the AAAI (“triple A-I”) study panel convened discussions on the long-term futures of AI. In 2014, the 100 Year Study on AI was launched at Stanford University. The study will run for as long as the university exists, and its findings on how to keep AI ethical will hopefully last even longer.


Salesforce Extracts Intelligence From Customer Data

“At Salesforce, we’re trying to build SaaS products to help our customers serve their customers better, and try to minimize those costs,” as Steven Tamm, CTO, puts it.

Part of serving customers better means protecting customer data, which means Salesforce employees can’t look at that data without permission. As a result, Salesforce doesn’t do data science on customer data itself, but it does provide customers with the tools to shape their own data.

Malware Defense Powered By AI
What does customer data mean for AI at Salesforce? Salesforce has a team of data scientists working on customer data security by detecting malware attacks. The team knows how to distinguish customers from imposters by tracking action patterns. “Our multi-tenant application security model uses innovative anomaly detection algorithms around log mining, and enables us to do user-level profiling at terabyte to petabyte levels of behavior logs,” explains Tamm.

Salesforce’s multi-tenant application security model generates individual behavioral models from each user’s previous behavior. After a year of data, it can already detect suspicious behavior, allowing the response team to intervene where needed. To avoid false positives, Salesforce combines tenant aggregates and user-specific information with a correction factor based on when the behavior occurred, where it occurred, and what the user was doing.
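The core of per-user behavioral modeling can be sketched with a simple baseline-and-deviation check: learn what is normal for a given user from their history, then flag events that fall far outside it. The log values and the z-score rule below are invented stand-ins; Salesforce’s production system is far richer, layering in tenant aggregates and the time/place/activity correction factors described above.

```python
# A minimal sketch of per-user anomaly detection on behavior logs:
# build a baseline from a user's past activity, then flag new events
# that lie more than a few standard deviations from the mean. The
# daily counts below are invented for illustration.
from statistics import mean, stdev

def is_anomalous(history, new_value, z_threshold=3.0):
    """Flag new_value if it deviates from the user's historical mean
    by more than z_threshold standard deviations."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_value != mu  # perfectly regular user: any change is suspect
    return abs(new_value - mu) / sigma > z_threshold

# Invented daily record-export counts for one user over two weeks.
history = [12, 9, 11, 14, 10, 13, 12, 11, 9, 12, 10, 13, 11, 12]
print(is_anomalous(history, 11))    # a typical day -> False
print(is_anomalous(history, 5000))  # a sudden bulk export -> True
```

At terabyte-to-petabyte log scale the statistics would be computed incrementally rather than from a list, but the flag-on-deviation logic is the same.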

What About Getting the Right Lead Score?
Of course, Salesforce isn’t just about data protection; customers use Salesforce to do business. Getting the right lead score or predicting when a big deal will close belongs to predictive analytics, a field covered by Salesforce’s home-grown AI, Salesforce Einstein, which deploys advanced machine learning, deep learning, and NLP to generate models and algorithms.

At Salesforce Einstein, “we’re selling forklifts, not forks,” quips Tamm. If forks stand for individual business models, then forklifts stand for the standardized schemata and data that make those business models more effective. The same systems can deliver similar insights to multiple businesses, for example whether customers are happy or what they might do next.

Sometimes the algorithms perform several natural language processing (NLP) tasks jointly. According to Chief Scientist Richard Socher, a single model can be applied to textual data for up to five language-related tasks at once.
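The "one model, many tasks" idea can be sketched structurally: a single shared representation is computed once and consumed by several task-specific heads. Socher’s actual joint models are deep networks; the two toy tasks and keyword rules below are invented stand-ins that only show the shape of the approach.

```python
# A toy illustration of joint multi-task NLP: one shared encoder (here
# a trivial tokenizer) feeds multiple task heads at once. The heads'
# keyword rules are invented; real systems learn these jointly.
def shared_encoder(text):
    """Shared representation consumed by every task head."""
    return text.lower().split()

def sentiment_head(tokens):
    positive = {"great", "happy", "love"}
    negative = {"bad", "unhappy", "hate"}
    score = sum(t in positive for t in tokens) - sum(t in negative for t in tokens)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def question_head(tokens):
    return bool(tokens) and tokens[0] in {"who", "what", "when", "where", "why", "how"}

def joint_model(text):
    tokens = shared_encoder(text)  # computed once, reused by all heads
    return {"sentiment": sentiment_head(tokens), "is_question": question_head(tokens)}

print(joint_model("why do customers love this feature"))
# -> {'sentiment': 'positive', 'is_question': True}
```

Sharing the encoder is the point: each added task reuses the same representation instead of requiring a whole new model.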

It’s a Win-Win Situation for Data Quality and AI
While AI applications improve data quality on Salesforce, better data quality enables AI advancements in return. In business, the structure and quantity of data typically suffer in favor of other objectives, and low-quality data makes AI applications less effective. To address this, AI applications are usually trained on better data first; however, not every customer already has top-notch data of their own.

To help smaller customers advance their data, Salesforce places its software in a multi-tenant cloud. According to Tamm, “you let the customers opt in to say what they’re willing to do with their data to achieve a certain goal.” The beauty is that customers are willing to pay to improve the quality of their data in the process, while Salesforce gains the ability to process large quantities of data across multiple customers. In the end, it’s a win-win.