There are two main approaches to creating an Artificial Intelligence technology, or any intelligent program.

The first approach to creating Artificial Intelligence was rule-based. One would code into the computer a set of rules or statements that determined what the computer did on various inputs. For example, one might argue that proofreading a paper is an activity which requires a modicum of intelligence, making grammar-checking software a simple form of AI. Most grammar-checking software involves programming into the computer the rules of English grammar: what types of words can follow intransitive and transitive verbs, the placement of an adjective in relation to its noun, and so on. From this set of rules, the program can take in a sentence, a paragraph, or a whole paper and accurately detect where there are mistakes. Certain semantic parsers, programs that try to "understand" what a sentence means, use such a rule-based approach when trying to break down an English sentence.
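
To make this concrete, here is a minimal sketch of a rule-based checker in Python. The two rules are invented for illustration; real grammar checkers encode a vastly larger rule set.

```python
import re

# Each rule is a hard-coded pattern written by the programmer, not
# learned from data. (Two invented rules only; real checkers have many.)
RULES = [
    # "a" before a word starting with a vowel should usually be "an"
    (re.compile(r"\ba ([aeiouAEIOU]\w*)"), 'use "an" before "{}"'),
    # accidentally doubled words, e.g. "the the"
    (re.compile(r"\b(\w+) \1\b", re.IGNORECASE), 'doubled word: "{}"'),
]

def check(text):
    """Apply every rule to the text and collect the complaints."""
    problems = []
    for pattern, message in RULES:
        for match in pattern.finditer(text):
            problems.append(message.format(match.group(1)))
    return problems

print(check("I ate a apple and read the the paper."))
# -> ['use "an" before "apple"', 'doubled word: "the"']
```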

"Machine learning algorithms can build on and use human intuition in finding solutions, as opposed to forcing teh human programmer to try and break down his own intuitive decisions"

Another example would be almost all chess-playing programs. A set of rules is given to the computer to determine how good or bad a certain board position is. The computer is then able to search through the possible moves in a position and apply that set of rules to each resulting board position to determine the optimum move to make.
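
A toy version of such an evaluation rule might look like the sketch below, which scores a position by material alone using the textbook piece values. A real engine's hand-written rules also weigh mobility, king safety, pawn structure, and much more.

```python
# Textbook piece values, hard-coded by the programmer rather than learned.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def evaluate(position):
    """Score a position from White's point of view by material alone.

    `position` is a string of piece letters: uppercase for White,
    lowercase for Black (kings and empty squares omitted).
    """
    score = 0
    for piece in position:
        value = PIECE_VALUES.get(piece.upper(), 0)
        score += value if piece.isupper() else -value
    return score

# White queen and rook vs. Black's two rooks and a knight:
print(evaluate("QRrrn"))  # 9 + 5 - 5 - 5 - 3 = +1, a slight edge for White
```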

While extremely effective in most cases, this approach has many drawbacks. First of all, it requires rules to be set, which means that the programmer needs expert knowledge in the subject to create the rule set. In addition, one must be able to quantify this expert knowledge. In the chess programs, for example, when grandmasters look at the board and decide whether their position is good or not, a lot of intuition is involved. It is not easy for them to break down and properly weigh the individual aspects of the position, which further complicates the creation of the rules.

Also, the rules must be valid both for the present and the future. Rule-based semantic parsers are actually not very popular in the field because, for the most part, people do not use proper English grammar, and the grammar rules do not lead to a unique interpretation even when they are followed.

In addition, this approach to creating artificial intelligence is very cumbersome. It is difficult to adapt rule-based algorithms to other purposes: for each new problem and each new behavior, a complicated set of rules needs to be devised and implemented. And if the underlying parameters of the problem start to change, for example when new common usages are adopted into the rules of English grammar, the program needs to be rewritten. Thus programs written solely with this methodology tend to be narrow in scope and constantly need refining.

The second approach scientists have taken towards Artificial Intelligence is the machine learning approach. Machine learning is a process through which a program is given a corpus of data, such as historical stock information and returns, and a task or set of tasks, such as predicting the returns of future stocks. The learning algorithm is considered successful if, as its corpus of data (called its training data) increases, its ability to complete each task improves.
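
To illustrate that success criterion, the sketch below trains a simple nearest-neighbour classifier on growing amounts of synthetic data (invented for the example, not real market data) and measures its accuracy on a fixed test set. Accuracy should typically climb as the training corpus grows.

```python
import random

random.seed(0)

def make_example():
    # One hypothetical numeric feature; the label is a noisy function of it.
    x = random.uniform(-1, 1)
    label = 1 if x + random.gauss(0, 0.3) > 0 else 0
    return x, label

test_set = [make_example() for _ in range(500)]

def predict(training_set, x):
    # 1-nearest-neighbour: copy the label of the closest training example.
    nearest = min(training_set, key=lambda example: abs(example[0] - x))
    return nearest[1]

for n in (5, 50, 500):
    training_set = [make_example() for _ in range(n)]
    hits = sum(predict(training_set, x) == y for x, y in test_set)
    print(f"{n:3d} training examples -> {hits / len(test_set):.0%} accuracy")
```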

How does Machine Learning work?

Every machine learning algorithm depends on three things, each of which must be expressible in the program. First, there needs to be an experience set, sometimes called a training set: the data the algorithm will "learn" from. Next, there needs to be a task, some action that we are trying to make the machine perform; a task could be playing a game of chess, predicting the outcome of a game, or predicting a recession. And finally there needs to be a performance measure: some way for the algorithm to differentiate between two different ways of completing the task. In general, a machine learning algorithm attempts to find its own rules and methods in order to optimize its performance measure.
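
These three ingredients can be labelled directly in code. A minimal sketch, using scikit-learn and its bundled iris dataset purely as a stand-in problem:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

# 1. Experience / training set: the data the algorithm learns from.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 2. Task: predict a flower's species from its measurements.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# 3. Performance measure: accuracy on unseen examples, which lets us
#    compare two different ways of completing the task.
print(f"test accuracy: {model.score(X_test, y_test):.0%}")
```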

Machine learning really started to come into vogue in the 1990s as machines became faster and improved mathematical optimization techniques were developed and refined. These algorithms have proven to be very successful, and have often shown themselves to perform better than the straight rule-based approach.

As a famous example, the chess-playing program Deep Blue, which challenged then-reigning champion Garry Kasparov in 1996 and ended up beating him in 1997, used machine learning in order to play the game. Instead of simply being handed a set of rules on how to value each board position, the program was given a large set of board positions that had been evaluated by a group of masters. These masters did not assign a numerical value to the board, but merely indicated whether the position gave an advantage to either side, or whether there was equality on the board. It was then up to the program to decide how to weight the different factors on the board in order to match the masters' evaluations as closely as possible. The result, the computer beating a man widely considered one of the best chess players of all time, was impressive to say the least.

Machine learning algorithms can build on and use human intuition in finding solutions, as opposed to forcing the human programmer to try to break down his own intuitive decisions. Deep Blue is the perfect example of this advantage: instead of forcing chess experts to find and properly weight the factors in a position, the experts were allowed to do what they do best, deciding whether a board position is good or not. The actual weighting of the factors was left up to the machine.
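
The weight-fitting idea can be sketched in a few lines. The following illustrates the principle, not IBM's actual procedure: each position is reduced to a handful of hypothetical factors, the experts supply only coarse judgements, and least squares finds the weights that best reproduce them.

```python
import numpy as np

# Rows are positions; columns are hypothetical factors such as material
# balance, mobility, and king safety (all numbers invented for the example).
factors = np.array([
    [ 1.0,  0.2,  0.1],
    [ 0.0,  0.8, -0.3],
    [-1.0, -0.5,  0.0],
    [ 0.0,  0.0,  0.4],
])

# Coarse expert judgements: +1 White is better, 0 equal, -1 Black is better.
judgements = np.array([1.0, 1.0, -1.0, 0.0])

# Least squares finds the factor weights that most closely reproduce the
# experts' evaluations; the programmer never chooses the weights directly.
weights, *_ = np.linalg.lstsq(factors, judgements, rcond=None)
print("learned factor weights:", weights)

# The fitted rule can now score positions the experts never saw.
print("score of a new position:", np.array([0.5, 0.1, -0.2]) @ weights)
```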

In conclusion, good machine learning algorithms can be used for many purposes, and do not need to be maintained as diligently as rule-based systems. As their body of experience grows, learning algorithms can modify their rules to take the new reality into account.
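
As a small sketch of that property, an online learner can fold each new batch of data into its fitted rule instead of being reprogrammed. The snippet below uses scikit-learn's SGDClassifier on synthetic monthly batches, purely for illustration:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier()  # a linear classifier trained by incremental updates

for month in range(1, 4):
    # Each "month" brings a fresh batch of labelled examples (synthetic here).
    X = rng.normal(size=(200, 4))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
    model.partial_fit(X, y, classes=[0, 1])  # update the rule, don't rewrite it
    print(f"month {month}: accuracy on this batch {model.score(X, y):.0%}")
```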

Because of these advantages, machine learning has started to take over from rule-based systems, although real-world problems are often tackled with a fusion of the two. That is how our firm, Rebellion Research, developed our Machine Learning Global Economic and Fundamental monitoring technology. We have the ability to monitor daily data from over 50 countries, allowing our technology to predict the American Housing Crisis or the Greek Debt Crisis, as well as to pick out strong fundamental growth companies that are well placed in front of positive economic momentum.