The Turing test remains a popular reference point in mainstream discussions about the future of AI.
Research eventually branched off from machine translation toward other linguistic applications. One of the most notable examples is ELIZA, a program designed to simulate conversation with a psychotherapist.
ELIZA used a simple process: it assigned weights to key words in the user’s input and used those weights to pick a rule that reorganized the sentence into a different form, often a question. This mimicked the way a psychotherapist reflects a patient’s statements back during a session.
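To make that mechanism concrete, here is a minimal sketch in Python of an ELIZA-style exchange. It is not Weizenbaum’s original implementation; the keyword ranks, patterns, and canned responses below are purely illustrative assumptions, chosen to show how ranked keyword matching and pronoun reflection can turn a statement into a question.

```python
import re
import random

# Illustrative keyword rules: (rank, pattern, response templates).
# The highest-ranked matching keyword wins; "{0}" is filled with the
# reflected remainder of the user's sentence.
RULES = [
    (10, r"\bi need (.+)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (5,  r"\bi am (.+)",   ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (1,  r"\bmy (.+)",     ["Tell me more about your {0}.", "Why does your {0} concern you?"]),
]

# Swap first- and second-person words so the reply reads naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(sentence: str) -> str:
    matches = []
    for rank, pattern, templates in RULES:
        m = re.search(pattern, sentence, re.IGNORECASE)
        if m:
            matches.append((rank, templates, m.group(1)))
    if not matches:
        return "Please tell me more."  # generic fallback when no keyword matches
    rank, templates, fragment = max(matches, key=lambda x: x[0])
    return random.choice(templates).format(reflect(fragment).rstrip(".!?"))

print(respond("I am worried about my exams."))
# e.g. "How long have you been worried about your exams?"
```

Even a toy version like this produces surprisingly conversational output, which helps explain why the original program left such a strong impression.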
Needless to say, there was no true understanding on the part of the machine during this whole process, only an algorithm. But that didn’t stop people from being convinced of the AI’s intelligence.
ELIZA was called the first “chatterbot”. It is the forerunner of today’s chatbots and of Google’s infinitely more sophisticated LaMDA system.
The First and Second AI Winters
Starting in the late 1960s, AI research declined drastically during two periods of deep funding cuts.
The first began in 1966 in the aftermath of a report by ALPAC (Automatic Language Processing Advisory Committee), which declared that research into computational linguistics, in particular machine translation, was a failure, and that the applications of such systems were too limited to be of use.
This state of affairs would last roughly two decades. The 1980s saw a resurgence of optimism about the possibilities of AI thanks to advances in computing technology, but it was followed by another short period of decline as the new systems proved too expensive to maintain. It wasn’t until the turn of the millennium that the technology truly caught up and progress began to accelerate.