Artificial intelligence (AI) is intelligence exhibited by machines, in contrast to the natural intelligence exhibited by humans and animals. The study and development of AI aims to simulate human functions and behavior. In computer science, the field of AI research defines itself as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of success at some goal. Colloquially, the term "artificial intelligence" is applied when a machine mimics cognitive functions that humans associate with human minds, such as learning and problem solving.
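The perceive-and-act definition of an intelligent agent can be illustrated with a minimal sketch. The thermostat scenario, the `perceive` and `act` functions, and the 21-degree target are hypothetical illustrations, not from any particular AI system:

```python
# A minimal sketch of an "intelligent agent": a hypothetical thermostat
# that perceives its environment (the room temperature) and takes the
# action most likely to achieve its goal (holding a target temperature).

TARGET = 21.0  # goal: hold the room at 21 degrees Celsius

def perceive(environment):
    """Read the current state of the environment."""
    return environment["temperature"]

def act(temperature):
    """Choose the action that moves the environment toward the goal."""
    if temperature < TARGET - 0.5:
        return "heat"
    if temperature > TARGET + 0.5:
        return "cool"
    return "idle"

room = {"temperature": 18.0}
print(act(perceive(room)))  # the room is cold, so the agent heats it
```

Even this trivial agent fits the definition above: it senses its environment and selects the action that maximizes its chance of reaching its goal.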
AI designed to perform a narrow task, such as facial recognition, internet search, or driving a car, is known as narrow AI, weak AI, or applied AI. Narrow AI can outperform humans at specific tasks, such as solving equations. General AI (AGI), or strong AI, refers to AI with the ability to match or outperform humans at nearly every cognitive task. The timing of the arrival of human-level AI is not known: some experts estimate it to be centuries away, while researchers surveyed at the 2015 Puerto Rico Conference expect it before 2060.
Artificial Intelligence (AI) and Machine Learning (ML) are often used interchangeably. AI is the broader concept of the ability of machines to carry out tasks in a manner that is considered "smart". ML is a technique for realizing AI: an application of AI in which machines are given access to data from which they learn for themselves.
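"Learning from data" can be sketched in a few lines of plain Python. This is a generic illustration of the idea, not code from any specific ML library; the data, the parameter `w`, and the learning rate are invented for the example:

```python
# A minimal sketch of "learning from data": fit y = w * x to example
# pairs by gradient descent. The rule y = 2x is never programmed in;
# the machine infers the parameter w from the data itself.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # example (x, y) pairs

w = 0.0    # the parameter the machine learns
lr = 0.05  # learning rate: how big each correction step is

for _ in range(200):
    for x, y in data:
        error = w * x - y    # how wrong the current guess is
        w -= lr * error * x  # nudge w to reduce the error

print(round(w, 2))  # ~2.0: learned from examples, not from rules
```

The distinction matters: a rule-based (non-ML) program would have the relationship `y = 2x` written in by a human, whereas here the program recovers it from examples alone.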
Deep learning is a subset of machine learning that uses artificial neural networks (ANNs). Artificial neural network processing devices, which can be algorithms or actual hardware, are loosely modeled after the neuronal structure of the mammalian cerebral cortex. ANN systems are designed to classify information in a way analogous to the human brain: a system is taught to recognize items such as images and classify them based on their elements, then makes statements, decisions, or predictions. A feedback loop enables "learning": the system can be told whether its decisions were right or wrong and modifies its approach in the future. Deep learning is used in online language translation and automated face-tagging in social media, and has been used to develop computer vision systems that detect breast cancer in mammogram scans. Neural networks are also trained for natural language processing (NLP) and natural language generation (NLG).
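The feedback loop described above can be shown with the smallest possible neural network. This is a generic single-neuron sketch written for illustration (the OR task, weights, and learning rate are assumptions, not from the source): the neuron makes a decision, is told how wrong it was, and adjusts its weights accordingly.

```python
import math
import random

# A minimal neural-network "feedback loop": a single artificial neuron
# learns the logical OR function. After each guess, the error is fed
# back to adjust the weights and bias - this is the learning step.

random.seed(0)
w = [random.random(), random.random()]  # connection weights
b = 0.0                                 # bias term
lr = 0.5                                # learning rate

examples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

def forward(x):
    """The neuron's prediction: a weighted sum squashed by a sigmoid."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))

for _ in range(2000):
    for x, target in examples:
        out = forward(x)
        error = target - out      # feedback: was the decision right?
        for i in range(2):        # modify the approach for next time
            w[i] += lr * error * x[i]
        b += lr * error

print([round(forward(x)) for x, _ in examples])  # [0, 1, 1, 1]
```

Deep learning stacks many layers of such neurons, but the principle is the same: predictions flow forward, and error feedback flows back to update the weights.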
The concept of artificial intelligence dates back to antiquity. In China, a life-sized human automaton is said to have been built by a mechanical engineer named Yan Shi around 1000 BC. In Roman Egypt, the Greek mathematician known as Hero of Alexandria wrote on programmable automata, presaging the modern discipline of cybernetics. In 1637, Rene Descartes identified the division between machines that might one day learn to perform a specific task and those that would be able to adapt to any job, foreshadowing the difference between narrow and general AI.
In 1956 the term "artificial intelligence" was coined by John McCarthy for his proposed research project to investigate whether features of intelligence could be simulated by computers.
Since the 20th century, AI has continued to develop and is still advancing. Technological advances have deepened the understanding of its theories and forged techniques essential to artificial intelligence. AI is now an integral part of the technology industry, and AI techniques are helping to solve challenges across many fields of science and technology.
Robotics and AI law think tank launches in Singapore
The Centre for Technology, Robotics, Artificial Intelligence & the Law (TRAIL) has been set up by the National University of Singapore Faculty of Law (NUS Law). Its remit is "to research into legal, ethical, policy, philosophical and regulatory questions associated with the use and development of information technology (IT), artificial intelligence (AI), data analytics and robotics in the practice of law".
Meet Mira: The App Intersecting Beauty And Artificial Intelligence
Mira aggregates reviews from the internet, provides synopses and click-throughs to purchase products, and serves as a community where consumers can ask and answer questions. The app leverages facial recognition and AI to match beauty consumers with products suited to their face shape, skin tone, skin type, and price range.
In-store AI diabetic retinopathy tests
With the launch of artificial intelligence-powered diabetic retinopathy eye exams at its in-store CarePortMD clinics in Delaware and Pennsylvania, Albertsons has become the first retailer to offer the diagnostic technology.
Waymo, the self-driving subsidiary of Alphabet, launched its first commercial autonomous ride-hailing service in the Phoenix suburbs.
Human drivers are in every vehicle so they can monitor or take control when needed.
Artificial Intelligence inspired by sense of smell rather than the visual system
Scientists studying how organisms process chemical information have uncovered coding strategies relevant to problems in AI.
Artificial intelligence identifies plant species for science
Computer algorithms trained on the images of thousands of preserved plants have learned to automatically identify species that have been pressed, dried and mounted on herbarium sheets. The work, published in BMC Evolutionary Biology on 11 August 2017, is the first attempt to use deep learning — an artificial-intelligence technique that teaches neural networks using large, complex data sets — to tackle the difficult taxonomic task of identifying species in natural-history collections.
A computer's newfound 'intuition' beats world poker champs
Up until this point, in most games in which AI competed with humans, all the information needed to make a decision was visible on the board. In Texas Hold'em, the players don't see each other's cards, giving each of them a different view of the game. The computer program makes decisions based on incomplete information, as humans do in real life.
Artificial Intelligence Solved the Mystery of Flatworm Regeneration
Researchers at Tufts University in Massachusetts aimed to find out what happens in the cells for two halves of what was once a single flatworm to become two complete, independent worms. In their paper published in PLOS Computational Biology, the researchers used artificial intelligence to solve the mystery in 42 hours.
Human-level control through deep reinforcement learning
Working toward artificial general intelligence, a research team built on DeepMind's DQN and published in Nature a system that learns to play different Atari games using meaningful abstractions from games it has previously played. The computer combines deep multitask reinforcement learning with deep transfer learning, enabling it to use the same deep neural network across different types of games.
Google computers teach themselves to recognize cats
In the paper, "Building high-level features using large scale unsupervised learning", researchers at Stanford and Google including Jeff Dean and Andrew Ng showed that it was possible to achieve a face detector with only unlabeled data.
IBM's Watson crowned trivia king
Cognitive computing engine Watson beat champion players of the TV game show Jeopardy! and claimed the $1 million prize. The Jeopardy! task, considered a milestone, requires answering natural-language questions drawn from broad domains of knowledge, otherwise known as unstructured data. Watson was trained on data rather than rules.
A statistical approach to language translation
This publication by IBM introduced the principles of probability into the until-then rule-driven field of machine translation. It is considered a leap in mimicking the cognitive processes of the human brain and helped form the basis of statistical machine learning.
Digital Equipment Corporation's XCON expert system was deployed in 1980 and was an early example of real-world use of AI.
The XCON system checked sales orders and designed the layout of each computer order, which allowed Digital to ship components directly to the consumer site, eliminating the need for additional final assembly facilities. By 1986 the system was credited with generating annual savings for the company of $40 million.
ELIZA---a computer program for the study of natural language communication between man and machine
ELIZA, developed at MIT by Joseph Weizenbaum, is thought to be the world's first chatbot. ELIZA represented an early implementation of natural language processing, which aims to teach computers to communicate with us in human language rather than requiring us to program them in computer code or interact through a user interface.
The Dartmouth Conference
The Dartmouth Conference was a summer workshop where professor John McCarthy coined the term “artificial intelligence” as a branch of science. The four organizers of the 1956 Dartmouth Workshop on artificial intelligence were John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon.
Artificial Intelligence and Robotics - UK-RAS White Paper
Artificial intelligence is changing the way we farm
July 24, 2019
Benefits & Risks of Artificial Intelligence - Future of Life Institute
June 1, 2016
Many Experts Say We Shouldn't Worry About Superintelligent AI. They're Wrong - IEEE Spectrum
Patenting Considerations for Artificial Intelligence in Biotech and Synthetic Biology
Using neuroscience to develop artificial intelligence
February 15, 2019
Documentaries, videos and podcasts
Andrew Ng: Artificial Intelligence is the New Electricity
Feb 2, 2017
Can we build AI without losing control over it?
How does IBM Watson work?
Nov 12, 2018
Machine Learning: Living in the Age of AI | A WIRED Film
Jun 20, 2019
- Machine learning: A field of computer science enabling computers to learn from data.
- Natural language processing (NLP): A field of computer science concerned with interactions between computers and human languages; it involves programming computers to process vast amounts of natural language data.
- Natural language understanding (NLU): A subtopic of natural language processing in artificial intelligence that deals with machine reading comprehension. NLU is considered an AI-hard problem.
- Deep learning: A branch of machine learning based on learning data representations.