Artificial Intelligence News for September 07, 2017

Microsoft Goes All In On Artificial Intelligence

Between Cortana and chatbots, it’s clear Microsoft wants to create smarter interactions between people and their technology. However, the company has never …

AI Ethics: Artificial Intelligence, Robots, and Society

Why Build AI? If robots might take over the world, or machines might learn to predict our every move or purchase, or governments might try to pin the blame on robots for their own unethical policy decisions, then why would anyone work on advancing AI? My personal reason for building AI is simple: I want to help people think. The idea is not that we should abuse robots; the idea is that robots, being authored by us, will always be owned completely. Here is a list of my AI / robot ethics publications: I now have two outstanding PhD students working on AI ethics, and in particular on the transparency of AI systems. The chapter argues that companion is the wrong metaphor for robots, and that it leads to the misallocation of both resources and responsibility to the detriment of our society. It’s an exciting 30-minute discussion of how AI is affecting and will affect our society, though the Guardian pitched it as “Do we want robots to be like humans?”, which might undersell it.
more…

Intelligent Machines

Zabaware Text-to-Speech Reader is an application that uses a speech synthesizer to read documents and more out loud. It assists people with reading disabilities and concentration problems, and lets you quickly devour large amounts of reading material through speed reading: simply set the speech speed high and read along as the program flashes on the screen the word it is speaking. Using the science of rapid serial visual presentation, the program helps reduce eye movement while reading and adds supplementary fast spoken speech for increased comprehension.
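The core idea is easy to picture even without the product: flash one word at a time in a fixed spot while a synthesizer speaks it. Below is a minimal, hypothetical Python sketch of that rapid-serial-visual-presentation-plus-speech pattern; it assumes the third-party pyttsx3 library is installed, and the function name rsvp_read is invented for illustration, with no connection to Zabaware’s actual code.

```python
# Illustrative sketch only -- not Zabaware's implementation.
# Flashes each word in a fixed terminal position while a local
# speech synthesizer (pyttsx3, assumed installed) reads it aloud.

import sys
import time
import pyttsx3


def rsvp_read(text: str, words_per_minute: int = 400) -> None:
    """Flash each word in place while text-to-speech reads along."""
    engine = pyttsx3.init()
    engine.setProperty("rate", words_per_minute)  # speaking speed in words/min
    gap = 60.0 / words_per_minute * 0.1           # short pause between words

    for word in text.split():
        # Overwrite the same spot so the reader's eyes never have to move.
        sys.stdout.write("\r" + word.ljust(20))
        sys.stdout.flush()
        engine.say(word)
        engine.runAndWait()                       # blocks until the word is spoken
        time.sleep(gap)
    print()


rsvp_read("Quickly devour large amounts of reading material through speed reading.")
```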
more…

Negative Effects of Artificial Intelligence

In today’s technological landscape, the advent of artificial intelligence – AI – may become one of mankind’s greatest achievements, but as renowned physicist Stephen Hawking warned, “It might also be the last, unless we learn to avoid the risks.” Artificial intelligence is defined as a computer or machine being able to perform “logical deduction and inference” and “make decisions based on past experience or insufficient or conflicting information.” Current technology already poses frightening ethical dilemmas, such as whether military drones or other robotic systems should be designed to use lethal force against targets without direct human involvement. The incorporation of artificial intelligence will also drastically shift humankind’s concept of work. One significant threat is the possibility that an artificial intelligence could use its already enhanced abilities to create machines of even greater cognitive power. Hawking stresses that any serious discussion of artificial intelligence must take into consideration the potential threats and how to manage them, and he calls for more critical, institutional research as increasing corporate resources are devoted to realizing breakthroughs in artificial intelligence.
more…

Elon Musk thinks artificial intelligence could cause World War III

Renowned for his concerns over artificial intelligence and its potential negative impact on humanity, tech titan Elon Musk has made his most concerning comments yet about AI: it could be the cause of World War 3. In a series of tweets on Monday, the Tesla, SpaceX, Neuralink and OpenAI co-founder wrote that artificial intelligence could be the eventual cause of the next world war. Musk expanded on his comments, saying the war may not be started by a country itself, but rather by one of its AIs, which could decide that a “preemptive strike is [the] most probable path to victory.” Putin also warned that the development of artificial intelligence raises “colossal opportunities and threats that are difficult to predict now.” Musk’s companies, specifically Tesla, have used artificial intelligence to enhance their products and services; in Tesla’s case, it underpins the autonomous driving capabilities of its vehicles. This is not the first time Musk has sounded the warning bells about AI. In recent months, he has verbally sparred with Facebook CEO Mark Zuckerberg for underestimating the potential negative impact of AI, saying Zuckerberg’s “understanding of the subject is limited.” He has also said that artificial intelligence will “beat humans at everything” within the next few decades, labeling it humanity’s “biggest risk.” “Once this Pandora’s box is opened, it will be hard to close,” Musk and 115 other specialists from around the globe wrote in a letter to the U.N. AI, as defined by Merriam-Webster, is “the capability of a machine to imitate intelligent human behavior.” It’s not just Musk sounding the warning bells, though: other luminaries, such as Stephen Hawking, have also expressed concern about artificial intelligence. Hawking has previously said humans need to leave Earth within about 100 years because of overpopulation, climate change, disease and artificial intelligence. Musk, for his part, has tried to address these concerns via his latest ventures, the aforementioned OpenAI and Neuralink. OpenAI is a non-profit, co-founded by Musk and Y Combinator president Sam Altman, that “aims to promote and develop friendly AI in such a way as to benefit humanity as a whole.” In an April 2017 interview with the website Wait But Why, Musk said that Neuralink wants to “redefine what future humans will be.”
more…

Artificial intelligence will transform universities. Here’s how

The most innovative AI breakthroughs, and the companies that promote them – such as DeepMind, Magic Pony, Ayasdi, Wolfram Alpha and Improbable – have their origins in universities. We believe AI is a new scientific infrastructure for research and learning that universities will need to embrace and lead; otherwise they will become increasingly irrelevant and eventually redundant. The implications of AI for university research extend beyond science and technology. Chatbots – intelligent agents using natural language – are being developed by universities such as the Technical University of Berlin; these will answer questions from students to help them plan their course of studies. As part of its Open Learning Initiative, Carnegie Mellon University has been working on AI-based cognitive tutors for a number of years. With developments such as Massive Open Online Courses over the last five years, tens of thousands of people can learn about a wide range of university subjects. Technology has ‘flipped the classroom’, forcing universities to think about where they can add real value – such as personalised tuition and more time for hands-on research – rather than traditional lectures. University administrative processes will benefit from applying AI to the vast amounts of data universities produce during their research and teaching activities. AI allows the tracking of individual student performance, and universities such as Georgia State and Arizona State are using it to predict marks and indicate when interventions are needed, helping students reach their full potential and preventing them from dropping out. Universities will need to be attuned to the new opportunities AI produces for supporting multidisciplinarity. There is stiff competition for people skilled in the development and use of AI, and universities see many of their talented staff attracted to work in the private sector. One of the most pressing AI challenges for universities is the need to develop better employment conditions and career opportunities to retain and incentivise their own AI workers. The very concept of ‘deep learning’, central to progress in AI, clearly impinges on the purpose of universities, and may create new competition for them. AI can augment and empower what universities already do, but continuing their missions of research, teaching and external engagement will require fundamental reassessment and transformation.
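To make the “predict marks and flag interventions” idea concrete, here is a small, hypothetical Python sketch of an early-warning classifier. It is not the Georgia State or Arizona State system; the feature names, data, and risk threshold are invented for illustration, and it assumes the scikit-learn and NumPy libraries are available.

```python
# Hypothetical sketch of an early-warning model for at-risk students --
# invented data and features, not any university's actual system.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Invented features per student: attendance rate, average assignment mark, logins per week.
X = np.array([
    [0.95, 78, 12],
    [0.60, 52,  3],
    [0.88, 65,  9],
    [0.40, 45,  1],
    [0.92, 81, 15],
    [0.55, 58,  4],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = student eventually dropped out

# Hold out a stratified test split so both outcomes appear in train and test.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=0, stratify=y
)

model = LogisticRegression().fit(X_train, y_train)

# Flag students whose predicted dropout risk crosses a threshold,
# so an adviser can intervene before marks slip further.
risk = model.predict_proba(X_test)[:, 1]
for student, p in zip(X_test, risk):
    if p > 0.5:
        print(f"Intervention suggested for {student} (risk {p:.2f})")
```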
more…

Elon Musk: Artificial intelligence may spark World War 3

While many nervous eyes around the world are watching rogue nation North Korea and its latest nuclear test, billionaire worry-wart Elon Musk warns that an international artificial intelligence race is more likely to cause World War III than a 20th-century-style arms race. “China, Russia, soon all countries w strong computer science. Competition for AI superiority at national level most likely cause of WW3 imo,” the Tesla Motors and SpaceX CEO tweeted early Monday. The doomy, gloomy prognosticating apparently came in response to reports out of Russia that President Vladimir Putin is also looped in on AI’s geopolitical potential. “Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world,” Putin told Russian students via satellite, RT reported. Putin didn’t leave it on such a super-villain-sounding note, though, adding: “If we become leaders in this area, we will share this know-how with the entire world, the same way we share our nuclear technologies today.” Wait, so is that where North Korea got their… Oh, never mind. Musk elaborated on his concerns further on Twitter; they are less about which government possesses strong AI first and more about decisions that an AI could make on behalf of a government or military to spark a world war. Musk has been warning of the dangers of unfettered AI for years. He believes so strongly in the threat of AI that he has a side project, Neuralink, which aims to create a computer-brain interface in order to help keep humans competitive in the face of emerging AI. In August, Musk also specifically tweeted that artificial intelligence poses “vastly more risk than North Korea.” Even as the isolated country ups the ante with its nuclear program, Musk is clearly more concerned with the emergence of killer robots. Let’s just hope North Korea doesn’t start sharing videos of a new deep learning system that plays a mean game of chess. If that ever emerges, it’s almost certain Musk will never sleep again.
more…