Artificial Intelligence News for September 7, 2017

Machine intelligence makes human morals more important | Zeynep Tufekci

Machine intelligence is here, and we’re already using it to make subjective decisions. But the complex way AI grows and improves makes it hard to understand …

Artificial Intelligence

Artificial intelligence, or machine intelligence, is a branch of the field of computer and information science, and an increasingly important one. A major goal of the field is to develop computers that can solve problems and accomplish tasks that are challenging for humans, or even beyond human intellectual capability. Stanford University computer science professor John McCarthy coined the term in 1956 to mean “the science and engineering of making intelligent machines.” In the early years of the artificial intelligence movement, enthusiasm ran high and AI pioneers made some bold predictions.

While their overall level of “intelligence” was low, computers proved to be very valuable and cost-effective aids to human intelligence. Increased speed and reliability, along with decreased cost, were key aspects of becoming “better”; in addition, computers became steadily “smarter,” and it is this aspect of computing that eventually came to be called artificial intelligence or machine intelligence. Today’s desktop computers play a better chess game than the multi-million-dollar Deep Blue system.

The idea predates the name. In 1950, Alan Turing published a paper discussing ideas of current and potential computer intelligence, and describing what is now known as the Turing Test for AI. In essence, the Turing Test involves humans communicating in natural language with someone or something, and trying to decide whether they are communicating with a human or a computer. Machines can also improve without direct instruction: in a computer game setting, for example, a computer might learn by analyzing games that it plays against itself (a minimal sketch of this idea follows this excerpt). Among the routes proposed to machine intelligence that rivals our own, artificial intelligence is probably the most commonly mentioned, but there are others: direct brain-computer interfaces, biological augmentation of the brain, genetic engineering, and ultra-high-resolution scans of the brain followed by computer emulation.

The birth of artificial-intelligence research as an autonomous discipline is generally thought to have been the month-long Dartmouth Summer Research Project on Artificial Intelligence in 1956, which convened 10 leading electrical engineers – including MIT’s Marvin Minsky and Claude Shannon – to discuss “how to make machines use language” and “form abstractions and concepts.” A decade later, impressed by rapid advances in the design of digital computers, Minsky was emboldened to declare that “within a generation … the problem of creating ‘artificial intelligence’ will substantially be solved.” The problem, of course, turned out to be much more difficult than AI’s pioneers had imagined. According to Tomaso Poggio, the Eugene McDermott Professor of Brain Sciences and Human Behavior at MIT, “These recent achievements have, ironically, underscored the limitations of computer science and artificial intelligence. We do not yet understand how the brain gives rise to intelligence, nor do we know how to build machines that are as broadly intelligent as we are.”
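The self-play idea mentioned above can be shown concretely. Below is a minimal sketch in Python, using the toy game of Nim (players alternately remove 1–3 counters; whoever takes the last counter wins) rather than any system from the articles; the game choice, reward scheme, and hyperparameters are all illustrative assumptions.

```python
# Minimal self-play learning sketch on the toy game of Nim.
# All names and parameters here are illustrative, not from the article.
import random
from collections import defaultdict

ACTIONS = (1, 2, 3)
Q = defaultdict(float)          # Q[(counters_left, action)] -> estimated value
ALPHA, EPSILON = 0.5, 0.1       # learning rate, exploration rate

def choose(state, greedy=False):
    """Pick an action epsilon-greedily from the current Q estimates."""
    legal = [a for a in ACTIONS if a <= state]
    if not greedy and random.random() < EPSILON:
        return random.choice(legal)
    return max(legal, key=lambda a: Q[(state, a)])

def play_one_game(start=21):
    """Play one game of the agent against itself and update Q."""
    state, history = start, []            # history of (state, action) per move
    while state > 0:
        action = choose(state)
        history.append((state, action))
        state -= action
    # The player who made the last move won; rewards alternate backward,
    # since consecutive moves belong to opposing players.
    reward = 1.0
    for state, action in reversed(history):
        Q[(state, action)] += ALPHA * (reward - Q[(state, action)])
        reward = -reward

for _ in range(50_000):
    play_one_game()

# Inspect the greedy policy for a few states.
print([choose(s, greedy=True) for s in range(5, 8)])  # expected: [1, 2, 3]
```

After enough self-play games, the learned policy rediscovers Nim’s classic winning strategy of always leaving the opponent a multiple of four counters, without ever being told the rule.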

Artificial neural systems, or simply neural networks, are modeled on the logical associations made by the human brain. Using neural networks to emulate brain function provides many positive properties, including parallel functioning, relatively quick realization of complicated tasks, distributed information, graceful degradation of computation when the network is damaged, and learning ability, i.e. adaptation to changes in the environment and improvement based on experience. These beneficial properties have inspired many scientists to propose neural networks as a solution for most problems: with a sufficiently large network and adequate training, the networks could accomplish many arbitrary tasks without knowing a detailed mathematical algorithm of the problem.

Currently, the remarkable ability of neural networks is best demonstrated by Honda’s Asimo humanoid robot, which doesn’t just walk and dance but can even ride a bicycle. Its exceptional human-like mobility is possible only because the neural networks connected to the robot’s motion and positional sensors, which control its “muscle actuators,” can be “taught” to do a particular activity. The learning ability of the neural network removes the need to define these instructions precisely. Despite this impressive performance, Asimo still cannot think for itself, and its behavior remains firmly anchored at the lower end of the intelligence spectrum: reaction and regulation. Recently, Siemens launched a new fire detector that uses a number of different sensors and a neural network to determine whether the combination of sensor readings comes from a fire or is just part of the normal room environment, such as particles of dust.

Are there limits to the capabilities of neural networks, or will they be the solution to creating strong AI? Artificial neural networks are biologically inspired, but that does not mean they are necessarily biologically plausible. Many scientists have published their thoughts on the intrinsic limitations of neural networks. Minsky and Papert’s book “Perceptrons” brought clarity to those limits. Although many scientists were already aware of the limited ability of a simple perceptron to classify patterns, Minsky and Papert’s approach of pinning down exactly what neural networks are good for impeded the further development of neural networks for years. Advances in neural network research have since overcome many of the limitations they identified, but some remain: networks of linear threshold units, for example, still violate the limited-order constraint when faced with linearly inseparable problems.

There have been several recent advances in artificial neural networks that integrate other specialized theories into the multi-layered structure, in an attempt to improve the system methodology and move one step closer to creating strong AI. One promising area is the integration of fuzzy logic, invented by Professor Lotfi Zadeh.
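The linear-separability limitation identified by Minsky and Papert can be demonstrated in a few lines. The sketch below, an illustrative example not drawn from the article, trains a single sigmoid perceptron and then a network with one hidden layer on XOR, the canonical linearly inseparable problem; the layer sizes, learning rate, and iteration counts are arbitrary choices.

```python
# Illustrative sketch: a single-layer perceptron cannot learn XOR,
# while one hidden layer trained with backpropagation can.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR truth table

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# --- Single perceptron (no hidden layer): stuck on XOR ---
w, b = rng.normal(size=(2, 1)), 0.0
for _ in range(5000):
    p = sigmoid(X @ w + b)
    w -= X.T @ (p - y) / len(X)           # gradient of cross-entropy loss
    b -= np.mean(p - y)
print("perceptron:", sigmoid(X @ w + b).round(2).ravel())   # hovers near 0.5

# --- One hidden layer: solves XOR ---
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)              # hidden activations
    p = sigmoid(h @ W2 + b2)              # output
    dp = (p - y) / len(X)                 # backpropagate the error
    dh = (dp @ W2.T) * h * (1 - h)
    W2 -= h.T @ dp
    b2 -= dp.sum(0)
    W1 -= X.T @ dh
    b1 -= dh.sum(0)
print("one hidden layer:", p.round(2).ravel())   # approx. [0, 1, 1, 0]
```

The single-layer model settles near 0.5 for every input because no single line can separate XOR’s classes, while the hidden layer lets the network carve the input space into separable regions.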

Artificial Intelligence and the Enterprise

Artificial intelligence technology promises to solve problems organizations could not tackle before, because it delivers capabilities that no human workforce could practically provide. CIOs, chief data officers, application development leaders and enterprise architects, among others, must be willing to explore, experiment with, and implement AI capabilities to pursue new value-generating opportunities. CDOs will immediately recognize that for AI to reach its full potential, they must develop greater organizational competency in data science and ensure that data and analytics can be relied upon for insights. Gartner projects that by 2021, 40% of new enterprise applications implemented by service providers will include AI technologies.

Q: What are the key factors an enterprise should focus on when implementing AI?

A: AI is not defined by a single technology. Rather, it spans many areas of study and the technologies behind capabilities such as voice recognition, natural-language processing, and image processing. These technologies and capabilities benefit from advances in algorithms, abundant computing power, and advanced analytical methods like machine learning and deep learning. CDOs will also need to work with application development leaders to enable applications that can change behavior based on the flow of data and events. In financial services, CDOs are dealing with a very large amount of data in the form of financial transactions that must be analyzed for fraud, or customer behaviors that provide insight into what type of financial advice would be most beneficial. Healthcare is another such industry, where insights generated from machine learning are improving discovery, diagnosis, care delivery and patient engagement.

CDOs may also face impacts in the areas of talent sourcing; skills development and training; organizational structure; analytical methodologies; analytical tools; data acquisition and monetization; algorithm acquisition and creation; analytical modeling; analytical model training and maintenance; and process adaptation. They may need to create a skilled team of data scientists, data engineers, statisticians and domain experts who can manage the complexity of the data, analytical methods and machine learning associated with AI, and help apply it with workers, customers and constituents. Without these skills, enterprises will not be able to implement effective AI in their IT ecosystems. To avoid the pitfalls of the skills gap, CDOs should invest in their existing employees to develop both their creative and analytical thinking skills, as AI implementation requires both.
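The fraud-screening use case mentioned in the excerpt is, at its core, an anomaly-detection problem. Below is a minimal sketch using scikit-learn’s IsolationForest on synthetic transactions; the features, data distributions, and model choice are illustrative assumptions, not details from the article.

```python
# Illustrative sketch: flagging anomalous transactions with an
# unsupervised model. The data here is synthetic, not real.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic transactions: [amount, hour_of_day]. Most are routine...
normal = np.column_stack([rng.gamma(2.0, 40.0, 5000),       # ~$80 typical
                          rng.normal(14, 3, 5000) % 24])    # daytime
# ...while a handful are large purchases in the middle of the night.
fraud = np.column_stack([rng.gamma(2.0, 40.0, 25) + 2000,
                         rng.normal(3, 1, 25) % 24])
X = np.vstack([normal, fraud])

# Fit an isolation forest; points that are easy to isolate from the
# bulk of the data are scored as anomalies.
model = IsolationForest(contamination=0.01, random_state=0).fit(X)
labels = model.predict(X)                 # +1 = normal, -1 = anomalous

flagged = np.where(labels == -1)[0]
print(f"flagged {len(flagged)} of {len(X)} transactions for review")
print("known-fraud rows among them:", np.sum(flagged >= len(normal)))
```

An isolation forest suits this setting because it needs no labeled fraud examples: unusual transactions are surfaced automatically and can then be routed to human analysts for review.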