Imagine: you enter a store and a virtual assistant immediately tells you where the products you need are, because it has access to your preferences; you finish your shopping in 10 minutes and, thanks to technology, return home avoiding traffic. In the end, what used to take you 3 hours is done in half an hour. Can you imagine living this way in a few years?
Artificial intelligence will change the way we live and work, and it is already doing so: Siri is a clear example of a virtual assistant, and Facebook recently showed that its artificial intelligence agents developed their own language to communicate.
What Other Changes Will We Have By 2020?
A New Method of Employee Selection:

Well-known headhunters and human resources recruiters spend more than half of their time reviewing resumes. This pattern could change in a few years thanks to new technologies.

Artificial intelligence (AI) can collect and process data from many sources to find the person whose profile best fits a job position.

“And like shopping sites, AI is built to learn from experience and get a better idea of what administrators want,” said Ted Greenwald, editor at the Wall Street Journal. “This software usually works in one of two ways: it identifies the most promising CVs among what may seem like an unmanageable avalanche, or it widens the net so that employers can find a more diverse group of candidates than they would have chosen on their own.”
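As a toy illustration of this kind of screening, a resume can be scored by its overlap with the job description's terms and the best matches surfaced. This is a hypothetical sketch, not the software Greenwald describes; real systems use far richer models, and all names and data here are invented:

```python
# Hedged sketch of AI-assisted resume screening: score each resume by
# word overlap with the job description, then rank candidates.

def tokenize(text):
    return {w.strip(".,").lower() for w in text.split()}

def score(resume, job_description):
    jd = tokenize(job_description)
    return len(tokenize(resume) & jd) / len(jd)

job = "Python developer with machine learning and cloud experience"
resumes = {
    "ana": "Senior Python developer, machine learning projects on cloud platforms",
    "bo": "Graphic designer experienced in branding and illustration",
}
ranked = sorted(resumes, key=lambda name: score(resumes[name], job), reverse=True)
print(ranked[0])  # ana
```

A production screener would replace the word-overlap score with a learned model, but the ranking loop stays the same.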
Creation of Personalized Medicine:
Thanks to big data, doctors will be able to tailor treatments to each patient, drawing on a personalized database containing their full medical history and on the analysis of data from various sources of information.
Automatic Music Composition:
Artificial intelligence can compose music with all kinds of melodies, and there are already examples that compose personalized melodies. Senior, a piece of software developed by researchers Michael Zobe, Waldo Green, and Molina at the University of Chicago, USA, is capable of composing emotional music using artificial intelligence techniques. To design and develop the system, the researchers worked with abstract representations of the concepts needed to model emotions and feelings.
Our Identity in The Hands of a Virtual Assistant:
Can you imagine a virtual agent verifying your identity to grant you access anywhere? Voice recognition, facial patterns, and fingerprints, among others, would be among the most expensive technologies in the world to implement. But they would reinforce the security of your identity, do away with remembering complicated passwords, and make it easier to detect criminals. Nuance Communications launched the first virtual assistant with integrated multifactor biometric security, NINA ID 2.0, which can identify you by the sound of your voice or a “selfie”.
Energy Savings Thanks to Home Automation:
One of the characteristics of artificial intelligence is its ability to learn patterns. By memorizing the schedules of a home's occupants, it can cut energy use and adjust the temperature automatically. Thanks to the joint work of professionals in industrial mechatronics and systems development, home automation is one of the most developed areas of artificial intelligence and robotics, and it could save millions a year.
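A minimal sketch of this pattern-learning idea, assuming a hypothetical occupancy log and illustrative temperature setpoints (thresholds and temperatures are invented, not a real product's values):

```python
from collections import Counter

def learn_occupied_hours(occupancy_log, threshold=0.5):
    """occupancy_log: list of (hour, occupied) observations over many days."""
    seen = Counter(hour for hour, _ in occupancy_log)
    occupied = Counter(hour for hour, occ in occupancy_log if occ)
    return {h for h in seen if occupied[h] / seen[h] >= threshold}

def setpoint(hour, occupied_hours, comfort=21.0, eco=16.0):
    """Heat to comfort when people are usually home, save energy otherwise."""
    return comfort if hour in occupied_hours else eco

# One week of simplified observations: occupied evenings and early mornings.
log = [(h, h >= 18 or h < 8) for _ in range(7) for h in range(24)]
hours = learn_occupied_hours(log)
print(setpoint(12, hours))  # 16.0 -> mid-day, house empty, eco temperature
print(setpoint(20, hours))  # 21.0 -> evening, house occupied, comfort temperature
```

The savings come from the eco setpoint applying automatically during hours the system has learned are usually empty.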
Bitcoins and Blockchain to Make Transactions:
Similar to a virtual logbook, the blockchain is a database of transactions created to secure the use of virtual currencies such as bitcoin. The technology behind the blockchain prevents the same coin from being spent more than once, ensuring the transparency and security of transactions regardless of the value of the currency used.

The tool solves an old problem in the financial market: assets are recorded, kept up to date, and made available to regulators, which should transform the entire industry. Nasdaq (the US stock exchange), for example, already uses blockchain to store information about the assets of some publicly traded companies, such as Amazon and Apple.
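The protection against reusing a coin comes from chaining: each block stores a hash of the previous block, so rewriting any past transaction invalidates everything after it. A minimal toy illustration of this "virtual logbook" (a concept demo, not a real blockchain protocol):

```python
import hashlib
import json

def block_hash(block):
    # Deterministic hash of a block's contents.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, transaction):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "tx": transaction})

def chain_is_valid(chain):
    # Each block must reference the true hash of its predecessor.
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

ledger = []
append_block(ledger, {"from": "alice", "to": "bob", "amount": 5})
append_block(ledger, {"from": "bob", "to": "carol", "amount": 2})
print(chain_is_valid(ledger))   # True

ledger[0]["tx"]["amount"] = 500  # tamper with history
print(chain_is_valid(ledger))   # False: the stored hashes no longer match
```

Real blockchains add distributed consensus on top, but the tamper-evidence shown here is what makes the ledger trustworthy.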
Vehicles with Autonomy:
In addition to transforming consumers' experience of mobility, driverless cars will affect the business of car manufacturers and traffic planning in cities and on highways.

There is currently a race among car manufacturers to see who will launch the first autonomous vehicle. The deadline they have proposed is 2020; however, mass production of such vehicles may only happen after 2025, once the technology reaches an affordable price.

The future is so promising that IHS Automotive, a market consultancy in the automotive sector, believes that by 2050 nobody will have to put their hands on the wheel of a car. Companies like Tesla and Google already have technology that allows a car to drive itself, without human intervention.
Virtual Intelligence to Teach and Learn
The “Artificial Intelligence and Life in 2030” report, promoted by Stanford University, highlights that technologies such as virtual reality, educational robotics, intelligent tutoring systems, online learning, and learning analytics will occupy a prominent place in classrooms within a few years.

Imagine recreating a historical event as part of your history class at school, or visiting the famous Louvre museum in Paris to see La Gioconda from your classroom. Thanks to virtual reality, you can immerse yourself in and explore subjects from various disciplines without leaving your seat.

Artificial intelligence is also gaining ground in the education sector. A clear example is Duolingo, which focuses on foreign-language learning: the app can detect errors in speech, correct them, and help students progress at their own pace.

Nor should we forget educational robotics, which lets students build and program their own robots, developing logical and deductive thinking as well as creativity.
Improvement for Humanity
Next comes a claim that takes on a debate familiar from the movies, and indirectly from the first industrial revolution: whether artificial intelligence is a threat to workers. Insight UK says no.
“Despite fears that it will replace human employees, in 2020 AI and machine learning will increasingly be used to help and augment them. For example, customer service workers must make sure they give customers the right advice. AI can analyze complex customer queries with a large number of variables, then present solutions to the employee, accelerating the process and increasing employee confidence.”

This is the position of Felix Gerdes, director of digital innovation services at Insight UK, who stresses that some companies, such as Lufthansa, already operate with this vision of artificial intelligence.
GPUs will Continue to Dominate AI Acceleration
AI hardware accelerators have become a leading competitive front in high technology. Even as rival technologies such as CPUs, FPGAs, and neural network processing units gain share in edge devices, GPUs will remain in the game thanks to their central role in cloud-to-edge application environments such as autonomous vehicles.

Nvidia's industry-leading GPU-based offerings appear poised for growth and adoption through 2020. Over the next decade, however, several non-GPU technologies, including CPUs, ASICs, FPGAs, and neural network processing units, will increase their share thanks to performance, cost, and energy-efficiency advantages in various high-end applications.

With each passing year, Nvidia attracts more competition, and industry-standard AI benchmarks will become a competitive battlefront. As the AI market matures and computing platforms compete for the distinction of being the fastest, most scalable, and least costly at handling workloads, industry-standard benchmarks will gain importance.
Last year, MLPerf benchmarks became more competitive as everyone from Nvidia to Google boasted of their superior performance. By 2020, AI benchmarks will become a key market entry strategy in a segment that will only grow more commoditized over time.
Over the decade, the results of the MLPerf benchmark will feature in solution vendor positioning strategies whenever high-performance AI-driven capabilities are critical.
A Two-Horse Race
AI modeling frameworks are the primary environments in which data scientists construct and train statistically driven computational graphs. By 2020, most practitioners will likely use some mix of TensorFlow and PyTorch on most projects, and these two frameworks will be available on most commercial workbenches.

As the decade progresses, the differences between these frameworks will diminish as the field comes to value feature parity over strong functional differentiation. Likewise, more AI tool vendors will offer framework-independent modeling platforms, which could breathe new life into older frameworks at risk of extinction.

Accelerating the spread of open AI modeling platforms is the industry's adoption of abstraction layers, such as Keras and ONNX, which allow a model built on one framework's front end to run on any supported framework backend.

By the end of the decade, it will be almost irrelevant which front-end modeling tool you use to build and train your machine learning model. Wherever you build your AI, the end-to-end data science pipeline will automatically format, compile, containerize, and serve it for optimal execution anywhere from cloud to edge.
Corporate AI will Shift to Continuous Real-World Experimentation
Every digital transformation initiative depends on leveraging the most appropriate machine learning models. This requires real-world experimentation, in which AI-based processes test alternative machine learning models and automatically promote those that achieve the desired result. By the end of 2020, most companies will run real-world experiments on all customer-facing and back-end business processes.

As business users turn to cloud providers' AI tools, features such as those recently launched by AWS, including model iteration studios, multi-model experiment tracking tools, and model tracking leaderboards, will become standard in all 24×7 AI-based business application environments. Over the decade, AI-based automation and DevOps capabilities will establish a universal best practice for process optimization.
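One simple way to picture this "test alternatives, promote the winner" pattern is a champion/challenger experiment: a small share of live traffic goes to the challenger model, and whichever scores better on the business metric wins. The models and exploration rate below are invented stand-ins, not any vendor's API:

```python
import random

def run_experiment(champion, challenger, requests, explore=0.2, seed=0):
    """Route a fraction of requests to the challenger; promote the better model."""
    rng = random.Random(seed)
    scores = {"champion": [], "challenger": []}
    for request in requests:
        if rng.random() < explore:
            scores["challenger"].append(challenger(request))
        else:
            scores["champion"].append(champion(request))
    avg = {k: sum(v) / len(v) for k, v in scores.items() if v}
    return "challenger" if avg.get("challenger", 0) > avg.get("champion", 0) else "champion"

# Stand-in models scored on a 0-1 success metric per request.
champion = lambda r: 0.70
challenger = lambda r: 0.75
print(run_experiment(champion, challenger, range(1000)))  # challenger
```

Production systems add statistical significance checks before promotion, but the routing-and-comparison loop is the core of the practice.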
AI will Automate The Central Modeling Function
Neural networks are the heart of modern AI. By 2020, AI-driven methodologies will spread to automate the practice of creating and optimizing neural networks for their intended purposes. As neural architecture search gains adoption, it will increase the productivity of data scientists by guiding their decisions about whether to build their models on established machine learning algorithms such as linear regression. As the decade progresses, this and other approaches will enable continuous AI DevOps through end-to-end automation.
AI-driven Conversational Interfaces will Reduce Hands-on Input in Most Applications

AI-based natural language understanding has become remarkably accurate. People are rapidly adopting hands-free use of their phones and other devices. As conversational interfaces gain ground, users generate more text through voice input. By the end of 2020, more texts, tweets, and other written messages will be produced through AI-driven voice assistants embedded in devices of all types.
Over the decade, voice assistants and chat interfaces will become a standard feature of products in every segment of the global economy.
Chief Legal Officers will Demand End-to-End AI Transparency
AI is becoming a more salient risk factor in enterprise applications. As companies face growing litigation over socioeconomic bias, privacy violations, and other impacts of AI-driven applications, legal counsel will demand a thorough audit trail that reveals how the machine learning models used in enterprise applications were built, trained, and governed.

By the end of 2020, the chief legal officers of most companies will require their data science teams to automatically record every step of the machine learning pipeline and to provide a plain-language explanation of how each model generates its automated inferences. Over the decade, a lack of built-in transparency will become a predominant reason for denying AI projects funding.
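A hypothetical sketch of such pipeline logging: each step is appended to a structured trail with a digest of its contents, so counsel can later reconstruct how a model was produced. Step names and fields here are illustrative assumptions, not any particular platform's schema:

```python
import hashlib
import json
import time

def log_step(trail, step, details):
    """Append one pipeline step to the audit trail with a content digest."""
    record = {"step": step, "details": details, "ts": time.time()}
    payload = {"step": step, "details": details}
    record["digest"] = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()).hexdigest()
    trail.append(record)

trail = []
log_step(trail, "ingest", {"dataset": "customers_v3", "rows": 120000})
log_step(trail, "train", {"algorithm": "logistic_regression", "auc": 0.87})
log_step(trail, "deploy", {"endpoint": "scoring-api", "version": "1.4.2"})

print([r["step"] for r in trail])  # ['ingest', 'train', 'deploy']
```

In practice the trail would be written to append-only storage and paired with the plain-language model explanations the text describes.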
Finally, we can safely assume that demands to regulate AI-based features across all products, especially those that use personally identifiable information, will grow in the coming years.

Beyond the growing emphasis on the transparency of AI systems, it is too early to say what impact these future mandates will have on the evolution of the underlying platforms, tools, and technologies.