Paperclip Maximizers, Artificial Intelligence and Natural Stupidity

Article from MIT Technology Review: "How existential risk became the biggest meme in AI"
Existential risk from AI

Some believe an existential risk accompanies the development or emergence of artificial general intelligence (AGI). Quantifying the probability of this risk is a hard problem, to say nothing of calculating the probabilities of the many non-existential risks that may merely delay civilization’s progress.

AI systems as we have known them have mostly been application-specific expert systems, programmed to parse inputs, apply some math, and return useful derivatives of those inputs. These systems differ from non-AI applications in that they apply the inputs they receive, and the information they produce, to future decisions. It is almost as if the machine were learning.

An example of a single-purpose expert system is SpamBayes, an open-source project based on an idea of Paul Graham's. It applies supervised machine learning and Bayesian probabilities to calculate the likelihood that a given email is spam or not spam (also known as ham). SpamBayes parses emails, applies an algorithm to the contents of a giv

[…]
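To make the idea concrete, here is a minimal sketch of Bayesian spam scoring in the spirit described above. This is a plain naive-Bayes classifier with add-one smoothing, not SpamBayes's actual algorithm (which combines per-word probabilities differently); the function names and training data are illustrative assumptions.

```python
import math
from collections import Counter

def train(messages):
    """Build per-class word counts from (text, label) pairs,
    where label is "spam" or "ham"."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = {"spam": 0, "ham": 0}
    for text, label in messages:
        for word in text.lower().split():
            counts[label][word] += 1
        totals[label] += 1
    return counts, totals

def spam_probability(text, counts, totals, k=1.0):
    """Estimate P(spam | words) via naive Bayes with add-k smoothing.

    Works in log-odds space to avoid floating-point underflow,
    then converts back with the logistic function."""
    log_odds = math.log(totals["spam"] / totals["ham"])  # prior odds
    n_spam = sum(counts["spam"].values())
    n_ham = sum(counts["ham"].values())
    vocab = len(set(counts["spam"]) | set(counts["ham"]))
    for word in text.lower().split():
        p_w_spam = (counts["spam"][word] + k) / (n_spam + k * vocab)
        p_w_ham = (counts["ham"][word] + k) / (n_ham + k * vocab)
        log_odds += math.log(p_w_spam / p_w_ham)
    return 1.0 / (1.0 + math.exp(-log_odds))

# Hypothetical training set, just to show the shape of the data.
training = [
    ("win money now", "spam"),
    ("cheap money offer", "spam"),
    ("meeting at noon", "ham"),
    ("lunch at noon tomorrow", "ham"),
]
counts, totals = train(training)
```

After training, `spam_probability("win cheap money", counts, totals)` scores well above 0.5, while `spam_probability("lunch at noon", counts, totals)` scores well below it, which is the core supervised-learning loop: counted evidence from labeled examples turned into a probability for each new email.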

This article has been indexed from Security Boulevard