Artificial intelligence is intelligence exhibited by machines, whereas natural intelligence is the intelligence exhibited by humans and animals. The study and development of AI aims to simulate human functions and behavior. In computer science, the field of AI research defines itself as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of success at some goal. Colloquially, the term "artificial intelligence" is applied when a machine mimics cognitive functions that humans associate with other human minds, such as learning and problem solving.
Artificial Intelligence (AI) is classified into types based on the degree to which an AI system can replicate or go beyond human capabilities. One classification system uses four types: reactive machines, limited memory machines, theory of mind and self-aware AI. Another classification divides AI into two divisions: weak AI (also called narrow AI) and strong AI (also called general AI or artificial general intelligence). Branches of AI are distinguished by the method used to achieve it.
Artificial intelligence designed to perform a narrow task, such as facial recognition, internet searches or driving a car, is known as narrow AI, weak AI or applied AI. Narrow AI can outperform humans at specific tasks such as solving equations. General AI, AGI or strong AI refers to AI with the ability to outperform humans at nearly every cognitive task. The timing of the arrival of human-level AI is not known; some experts estimate it to be centuries away, while researchers surveyed at the 2015 Puerto Rico Conference expected it before 2060.
Artificial Intelligence (AI) and Machine Learning (ML) are often used interchangeably. AI is the broader concept of the ability of machines to carry out tasks in a manner that is considered “smart”. ML is a technique for realizing AI and an application of AI in which machines are given access to data from which they learn for themselves.
Deep learning is a subset of machine learning that uses artificial neural networks (ANN). Artificial neural networks are processing devices, either algorithms or actual hardware, that are loosely modeled after the neuronal structure of the mammalian cerebral cortex. ANN devices are designed to classify information in a way analogous to the human brain. An ANN system is taught to recognize items such as images and classify them based on their elements, then makes statements, decisions or predictions. A feedback loop enables “learning”: the system can be told whether its decisions are right or wrong and modifies its approach accordingly. Deep learning is used in online language translation and automated face-tagging in social media, and has been used to develop computer vision systems that detect breast cancer from mammogram scans. Trained neural networks are also used for natural language processing (NLP) and natural language generation (NLG).
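The feedback loop described above can be sketched with a single artificial neuron. This toy example (a classic perceptron update rule, not any particular production system) learns the logical AND function: after each prediction the neuron is told whether it was right or wrong, and it nudges its weights for next time.

```python
# Minimal sketch of ANN "learning by feedback": one artificial neuron
# learns logical AND. All names and parameters here are illustrative.

def step(x):
    """Activation: fire (1) if the weighted sum is non-negative, else 0."""
    return 1 if x >= 0 else 0

def train_neuron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = step(w[0] * x1 + w[1] * x2 + b)
            error = target - pred       # feedback: 0 if right, +/-1 if wrong
            w[0] += lr * error * x1     # modify the approach for the future
            w[1] += lr * error * x2
            b += lr * error
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_neuron(AND)
preds = [step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in AND]
print(preds)  # learns AND: [0, 0, 0, 1]
```

Real deep networks stack many such units in layers and use gradient-based feedback, but the right/wrong-then-adjust loop is the same idea.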
Machine learning is a technique for realizing AI and it is an application of AI where machines are given access to data from which they learn for themselves.
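As a minimal illustration of "learning from data for themselves", the sketch below fits a line y = w*x + b to example points by closed-form least squares; no rule about the relationship is hand-coded, it is recovered from the data. The data points are made up for illustration (they follow y = 2x + 1 exactly).

```python
# Toy machine learning: recover the slope and intercept of a line
# purely from example data, via least squares.

def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - w * mean_x
    return w, b

xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]   # generated from y = 2x + 1
w, b = fit_line(xs, ys)
print(w, b)  # → 2.0 1.0
```

Practical ML models are far richer than a straight line, but they share this shape: parameters are inferred from data rather than programmed explicitly.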
Computer vision is the ability of artificially intelligent systems to “see” like humans. In the field of computer vision, machines are developed to automate tasks that require visual cognition. Deep learning and artificial neural networks are used to develop computer vision. The following are topics related to computer vision as well as tools and libraries. Companies developing or selling computer vision products are under the Computer Vision subheading under the AI applications and companies section.
Natural language processing is a branch of AI that helps computers understand, interpret and manipulate human language. The following are tools and topics related to NLP. NLP companies developing or selling NLP applications are found in the AI applications and companies section under Natural language processing.
Advances in deep learning are expected to increase understanding of quantum mechanics. It is thought that quantum computers will accelerate AI. Quantum computers have the potential to surpass conventional ones in machine learning tasks such as data pattern recognition. The following are topics, companies and technologies that link quantum computing and AI.
Semantic computing deals with the derivation, description, integration and use of semantics (meaning, context and intention) for resources including data, documents, tools, devices, processes and people. Semantic computing includes analytics, semantic description languages, integration of data and services, interfaces and applications. In AI, semantic computing involves the creation of ontologies that are combined with machine learning to help computers create new knowledge. Semantic technology helps cognitive computing extract useful information from unstructured data in pattern recognition and natural-language processing.
The Internet of Things (IoT) refers to objects that connect and transfer data via the internet and the sharing of information between devices. IoT based smart systems generate a large volume of data including sensor data valuable to researchers in healthcare, bioinformatics, information sciences, policy and decision making, government and enterprises. AI can be combined with machine learning for analysis of data and prediction.
While some lines of AI research aim to simulate the human brain, the artificial life (or animat) approach is concerned with the conception and construction of artificial animals, as simulations or actual robots. It aims to explain how certain faculties of the human brain might be inherited from the simplest adaptive abilities of animals. Evolutionary computation is a generic optimization technique that draws inspiration from the theory of evolution by natural selection.
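Evolutionary computation can be illustrated with a toy genetic algorithm: evolve a bit string toward all ones via selection, crossover and mutation. The target length, population size and rates below are arbitrary choices for illustration, not parameters from any specific system.

```python
# Toy evolutionary computation: selection, crossover, and mutation
# evolve random bit strings toward the all-ones string.
import random

random.seed(0)
TARGET_LEN = 20

def fitness(bits):
    return sum(bits)  # number of ones; higher is fitter

def mutate(bits, rate=0.02):
    return [b ^ (random.random() < rate) for b in bits]  # flip bits rarely

def crossover(a, b):
    cut = random.randrange(1, len(a))  # single-point crossover
    return a[:cut] + b[cut:]

def evolve(pop_size=30, generations=100):
    pop = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == TARGET_LEN:
            break
        parents = pop[:pop_size // 2]  # selection: keep the fittest half
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

Because the fittest half is carried over unchanged each generation, the best fitness never decreases; the population climbs toward the optimum without any gradient information, which is the defining trait of this family of methods.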
As AI technology has matured, it has seen significant integration and adoption within the field of radiology, the science of using x-rays or other high-energy radiation for diagnosis and treatment of diseases. Reliance on AI-driven applications in radiology has risen substantially; in 2020, clinical adoption of AI by radiologists was 30 percent, up from zero in 2015. AI applications in radiology are mainly used for two purposes: to improve workflow and to provide clinical decision support. With workflow applications, radiologists use AI apps to gather patient reports and exams in one place, making patient information easier to analyze and interpret. Clinical decision applications can perform a wide variety of analyses, including data analytics, image reconstruction, disease and anatomy identification, and advanced visualization. Challenges with AI in radiology include concerns about integrating AI applications into clinical practice, especially with an influx of applications gaining regulatory approval in recent years.
The concept of artificial intelligence has existed since ancient times. In China, a life-sized human automaton was made by a mechanical engineer named Yan Shi around 1000 BC. In Egypt, the Greek mathematician Hero of Alexandria wrote works on programmable automata that presage the modern discipline of cybernetics. In 1637, René Descartes identified the division between machines that might one day learn to perform a specific task and those that would be able to adapt to any job, analogous to the difference between narrow and general AI.
In 1956 the term "artificial intelligence" was coined by John McCarthy for his proposed research project to investigate whether features of intelligence could be simulated by computers.
Since the 20th century, AI has developed continually and is still advancing. Technological advances have deepened the understanding of its theories and forged techniques essential to artificial intelligence. AI is now an integral part of the technology industry, and AI techniques and advancements aid various fields of science and technology in solving challenging problems.
Artificial intelligence companies
Mira aggregates reviews from the internet and provides synopses and click-throughs to purchase products and serves as a community for consumers to ask and answer questions. The app leverages facial recognition and AI to match beauty consumers with products that match their face shape, skin tone, skin type and price range.
The Centre for Technology, Robotics, Artificial Intelligence & the Law (TRAIL) has been set up by the National University of Singapore Faculty of Law (NUS Law). Its remit is ‘to research into legal, ethical, policy, philosophical and regulatory questions associated with the use and development of information technology (IT), artificial intelligence (AI), data analytics and robotics in the practice of law’.
With the launch of artificial intelligence-powered diabetic retinopathy eye exams at its in-store CarePortMD clinics in Delaware and Pennsylvania, Albertsons has become the first retailer to offer the diagnostic technology.
Human drivers are in every vehicle so they can monitor or take control when needed.
Scientists studying how organisms process chemical information have uncovered coding strategies relevant to problems in AI.
Computer algorithms trained on the images of thousands of preserved plants have learned to automatically identify species that have been pressed, dried and mounted on herbarium sheets. The work, published in BMC Evolutionary Biology on 11 August, is the first attempt to use deep learning, an artificial-intelligence technique that teaches neural networks using large, complex data sets, to tackle the difficult taxonomic task of identifying species in natural-history collections.
Up until this point, in most games in which AI has competed with humans, all the information needed to make a decision was on the board. In Texas Hold’em, the players cannot see each other’s cards, giving each of them a different view of the game. The computer program makes decisions based on incomplete information, as humans do in real life.
Researchers at Tufts University in Massachusetts aimed to find out what happens in the cells for two halves of what was once a single flatworm to become two complete, independent worms. In their paper published in PLOS Computational Biology, the researchers used artificial intelligence to solve the mystery in 42 hours.
Working towards artificial general intelligence, a research team built on DeepMind’s DQN and reported in Nature a system that learns to play different Atari games using meaningful abstractions from previous games it has played. The computer uses deep multitask reinforcement learning combined with deep transfer learning, enabling it to use the same deep neural network across different types of games.
In the paper, "Building high-level features using large scale unsupervised learning", researchers at Stanford and Google including Jeff Dean and Andrew Ng showed that it was possible to achieve a face detector with only unlabeled data.
Andrew Ng: Artificial Intelligence is the New Electricity
Feb 2, 2017
Artificial Intelligence and Robotics - UK-RAS White Paper
Artificial intelligence is changing the way we farm
July 24, 2019
Benefits & Risks of Artificial Intelligence - Future of Life Institute
June 1, 2016
Can we build AI without losing control over it?
- Machine learning: A field of computer science enabling computers to learn.
- Natural language processing (NLP): A branch of artificial intelligence concerned with giving computers the ability to comprehend spoken words and text in the same way humans can.
- Natural language understanding (NLU): A subtopic of natural language processing that deals with machine reading comprehension. NLU is considered an AI-hard problem.
- Deep learning: A branch of machine learning based on learning data representations.