Describing Content in Intelligent Terms
The Web itself is becoming a kind of cyborg intelligence: human and machine, harnessed together to generate and manipulate information.
If automatability is to be a human right, then machine assistance must eliminate the drudge work involved in exchanging and manipulating knowledge. These lines are from the paper "The Evolution of Web Documents: The Ascent of XML," in which the authors illuminate the potential of XML as a markup language that describes content in intelligent terms.
XML was created to function behind the scenes, providing machines with information about documents that will facilitate decision making at the machine level.
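To make this concrete, here is a minimal sketch of machine-readable markup being consumed by a program; the document structure and element names (catalog, book, price) are invented for illustration and are not drawn from the paper.

```python
import xml.etree.ElementTree as ET

# A hypothetical XML fragment whose markup describes what each value means.
document = """
<catalog>
  <book isbn="0-00-000000-0">
    <title>An Example Title</title>
    <price currency="USD">19.95</price>
  </book>
</catalog>
"""

root = ET.fromstring(document)
for book in root.findall("book"):
    title = book.findtext("title")
    price = float(book.findtext("price"))
    # Because the markup says what each element means, a program can act on
    # the document without human interpretation, e.g. flag inexpensive items.
    if price < 25.0:
        print(f"{title} ({book.get('isbn')}) costs {price}")
```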
Intelligent Agents
Entities, or computer programs, that learn from their environment and act on what they have learned can be defined as intelligent agents. Such an agent can be as simple as one that triggers an alarm when it detects a fire, or as complex as a human being. Intelligent agents and their applications to real-world problems are becoming smarter and more diverse by the day. Whether it is an autonomous intelligent agent supporting ambient intelligence, a rational agent mining stock-market trends, a bot negotiating an online bid, or a virtual customer buying books on your behalf, the applications and uses of intelligent agents are everywhere.
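As an illustration of the simplest kind of agent mentioned above, the following toy sketch implements a fire-alarm agent with a perceive-and-act loop; the temperature readings and the 50-degree threshold are assumptions made purely for the example.

```python
class FireAlarmAgent:
    """A toy agent: it remembers what it has sensed and acts on it."""

    def __init__(self, threshold=50.0):
        self.threshold = threshold
        self.max_seen = float("-inf")

    def perceive(self, temperature):
        # Learn from the environment: remember the hottest reading so far.
        self.max_seen = max(self.max_seen, temperature)

    def act(self):
        # Act on what has been learned: raise the alarm past the threshold.
        return "ALARM" if self.max_seen >= self.threshold else "ok"

agent = FireAlarmAgent()
for reading in [21.5, 23.0, 48.9, 61.2]:
    agent.perceive(reading)
    print(reading, agent.act())
```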
This age of information overload, with ever-growing content creation on the World Wide Web amounting to billions of pages per day, presents some unique problems, such as
- real-time recommendations,
- data mining,
- abstracting useful information, and
- search optimization based on your unique profile.
Intelligent agents, with their ability to work with enormous amounts of data (often fed by social networks and services such as Twitter and blogs), their scalability, their robustness, and their capacity to learn from the environment, are promising candidates for solving these problems.
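One hedged sketch of how an agent might tailor results to a unique profile: represent the profile and candidate documents as bags of words and rank documents by cosine similarity to the profile. All names and data below are invented for illustration; a real agent would use far richer signals.

```python
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# A user profile built, say, from posts and pages the user has engaged with.
profile = Counter("machine learning agents web search".split())

documents = {
    "doc1": Counter("stock market trends mining".split()),
    "doc2": Counter("intelligent agents learning from the web".split()),
    "doc3": Counter("recipes for bamboo shoots".split()),
}

# Recommend the documents most similar to the profile.
ranked = sorted(documents, key=lambda d: cosine(profile, documents[d]), reverse=True)
print(ranked)
```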
Link Analysis
Link analysis for web search spawned a surge of interest in mathematical techniques for improving the user experience in information retrieval. Although link analysis was developed initially to combat the spamming of text-based search engines, it has itself, over time, become susceptible to innovative forms of link spam; the battle continues between web search engines seeking to preserve editorial integrity and adversaries with strong commercial interests.
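The text does not name a specific algorithm, but PageRank is a well-known example of such link analysis; the following is a minimal power-iteration sketch over an invented four-page web graph.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Power iteration over a dict mapping each page to its outlinks."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # Dangling page: spread its rank evenly over all pages.
                for q in pages:
                    new_rank[q] += damping * rank[page] / n
            else:
                for q in outlinks:
                    new_rank[q] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

web = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
print(pagerank(web))
```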
Such link analysis is but one visible example of the broader area of social network analysis, which originated with Stanley Milgram's classic experiments of the 1960s and gave rise to the popular myth that any two humans have at most six degrees of separation.
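Degrees of separation are simply shortest-path lengths in an acquaintance graph; a small breadth-first-search sketch over a made-up graph illustrates the idea.

```python
from collections import deque

def degrees_of_separation(graph, source, target):
    # Breadth-first search: the first time we reach `target`, the recorded
    # distance is the length of the shortest chain of acquaintances.
    seen = {source: 0}
    queue = deque([source])
    while queue:
        person = queue.popleft()
        if person == target:
            return seen[person]
        for friend in graph.get(person, []):
            if friend not in seen:
                seen[friend] = seen[person] + 1
                queue.append(friend)
    return None  # no chain of acquaintances connects the two

graph = {
    "Alice": ["Bob", "Carol"],
    "Bob": ["Dave"],
    "Carol": ["Dave", "Eve"],
    "Dave": ["Frank"],
    "Eve": [],
    "Frank": [],
}
print(degrees_of_separation(graph, "Alice", "Frank"))  # 3
```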
Over the last ten years, these ideas from social network analysis have progressed beyond networks of links between people. The divestiture of the telephone monopoly in the United States led to the study of networks of phone calls. In the network of emails between users, dense regions are known to form around participants with common topical interests. A new breed of examples comes from so-called recommendation systems deployed at e-commerce sites on the web: by analyzing the links between people and the products they purchase or rate, the system recommends products that users might be interested in, based on their own and other users' past behavior.
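A hedged sketch of this person-product link analysis, in the style of user-based collaborative filtering (the text above does not prescribe a particular algorithm): users are compared through the products they have rated, and unrated products from similar users are suggested. All users, products, and ratings below are invented.

```python
from math import sqrt

ratings = {
    "ann":  {"camera": 5, "tripod": 4},
    "ben":  {"camera": 4, "lens": 5},
    "cara": {"tripod": 5, "lens": 4, "bag": 3},
}

def similarity(u, v):
    # Cosine similarity over the products the two users have both rated.
    shared = set(ratings[u]) & set(ratings[v])
    if not shared:
        return 0.0
    dot = sum(ratings[u][p] * ratings[v][p] for p in shared)
    nu = sqrt(sum(r * r for r in ratings[u].values()))
    nv = sqrt(sum(r * r for r in ratings[v].values()))
    return dot / (nu * nv)

def recommend(user):
    # Score each product the user has not rated by similar users' ratings.
    scores = {}
    for other in ratings:
        if other == user:
            continue
        sim = similarity(user, other)
        for product, rating in ratings[other].items():
            if product not in ratings[user]:
                scores[product] = scores.get(product, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("ann"))  # ['lens', 'bag'] under these toy ratings
```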
Intelligent Content
Intelligent Content should enable the creation and reuse of complex, compelling media by artists who need to know little about the technical details of how their tools actually work.
In addition to content-based information retrieval (given an image of a panda, find other images of pandas), semantically enabled content should open avenues for context-sensitive retrieval (given the sound of a panda eating bamboo, find an image of a panda eating bamboo).
Since the objects or content in question carry intelligence, the associated contextual information should be clearer and more useful, helping a production company find examples of pandas, and of pandas eating bamboo, from other productions with similar actions.
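One way to picture such context-sensitive retrieval over intelligent content is to attach semantic tags to each media asset and rank assets by how many of a query's tags they carry; the asset identifiers, productions, tags, and query below are hypothetical.

```python
assets = [
    {"id": "clip-017", "production": "Wildlife A", "tags": {"panda", "eating", "bamboo"}},
    {"id": "img-203",  "production": "Wildlife B", "tags": {"panda", "sleeping"}},
    {"id": "aud-441",  "production": "Nature Docs", "tags": {"panda", "eating", "bamboo", "audio"}},
]

def retrieve(query_tags):
    # Rank assets by how many of the query's semantic tags they carry,
    # regardless of which production they came from.
    query = set(query_tags)
    scored = [(len(query & a["tags"]), a) for a in assets]
    return [a["id"] for score, a in sorted(scored, key=lambda s: -s[0]) if score]

print(retrieve(["panda", "eating", "bamboo"]))  # clip-017 and aud-441 rank first
```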