This is an extract of a keynote presentation I gave at the Sci-Bono Discovery Centre on Thursday, June 21 2012.

The internet as we know it today, with the cornucopia of information it provides, has been in existence for just over 18 years.

At its current rate of growth and development, the internet stands to be the greatest machine ever built in the history of humanity. It also happens to be the most reliable machine human beings have ever constructed: it has never crashed and has run uninterrupted since its inception. Consider its usage, too.

There are over 100 billion clicks online per day, approximately five trillion links between all internet pages in the world, and over two million emails sent every second from around the planet. Keeping the internet running continuously also accounts for five percent of all electricity used on the planet.

In size and complexity, the internet already approximates the way a human brain functions. Unlike a brain, however, the internet continues to grow in both size and complexity every two years.

At the rate at which the internet is evolving, it is projected that by 2040 it will store more knowledge and information, and operate at a higher level of cognisance, than the whole of humanity combined.

In a hyper-connected world, it is no longer physical space that limits society, but clicks. The advent and continuous evolution of the internet has had the profound effect of pulling society out of two-dimensional space, breaking the boundaries that society has relied upon to define and protect itself.

As Tim Berners-Lee theorises, human beings will in future strike a natural balance between the creative and analytical parts of their brains. Berners-Lee believes they will be able to solve large analytical challenges by turning computing power loose on the semantic web.

However, before we can even contemplate a semantic web, or the evolution to a web 3.0, I need to explain the stages through which the web has evolved and how exactly a semantic web, or web 3.0, would come into existence.

Web 1.0, or the information web, was straightforward enough. It was full of static content and could be seen as an extension of offline media such as print and TV, providing information to users in a broadcast model of dissemination.

In the case of web 1.0, producers created content for online users to consume and, in a limited manner, to share with others online.

The next evolution of the web brought about web 2.0, or the social web, which is characterised by users communicating, contributing and collaborating.

Social networks, live chat, IM, folksonomies, mash-ups, virtual worlds and even mobile media are part of the web 2.0 landscape.

These forms of collaboration and sharing break down the traditional media broadcast model, the monolithic method of communication and content generation that characterised the previous generation of the web.

Web 2.0 has empowered users of the web to shift from being passive consumers of content and information to active producers of it, allowing them to participate equally in content creation and to share that content with a wider audience online.

My book, Imagining Web 3.0, attempts to develop a prescient theory of what the next stage of the web (web 3.0) will look like. Web 3.0 has also been called the semantic web, because the software that will bring it to fruition will have the power to learn, intuit and decide. This version of the web derives its “wisdom” from software that learns by looking at online content, analysing the popularity of that content and drawing conclusions from it.

In other words, instead of users refining information and opinion online, intelligent software would have the ability to do so.

Web 3.0 has the potential to change the entire process by bringing machines closer to users and producers, resulting in more dynamic, interactive and efficient creation of content online, as well as more efficient management of that content.

The major premise of web 3.0 is based upon linking, integrating and analysing data from various data sources into new information streams.

The basic feature of web 3.0 is to allow a person or a machine to begin with a single database and extend its reach to a practically unlimited number of databases, connected not by wires but by common elements such as place, concept, age and so on.

Even though academic researchers, media theorists, software developers and online users may have differing definitions of what web 3.0 entails, one definition shared by all stakeholders is that web 3.0 will result in the personalisation of the internet.

The linking of data in web 3.0 is achieved with the assistance of semantic technologies such as the Resource Description Framework (RDF) and SPARQL, a standardised query language for RDF data, both of which are already in use in the development of the semantic web.

Web 3.0 will operate mainly on RDF, a standard model for data interchange on the internet. RDF was designed to provide a common way to describe information found online so that it can be read and understood by computer applications.
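The core of the RDF model can be sketched very simply: every statement is a subject-predicate-object triple, with URIs naming the things described so that independent datasets can refer to the same resources. The following minimal Python sketch is purely illustrative, not an RDF library; the example.org URIs and the `match` helper are assumptions made for the example (the Dublin Core and FOAF predicate URIs are real vocabularies).

```python
# Illustrative sketch of the RDF data model: a graph is a set of
# (subject, predicate, object) triples, each term ideally a URI.
triples = [
    ("http://example.org/book/web3", "http://purl.org/dc/elements/1.1/title", "Imagining Web 3.0"),
    ("http://example.org/book/web3", "http://purl.org/dc/elements/1.1/creator", "http://example.org/person/chetty"),
    ("http://example.org/person/chetty", "http://xmlns.com/foaf/0.1/name", "Lee-Roy Chetty"),
]

def match(graph, s=None, p=None, o=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return [t for t in graph
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Everything the graph says about the book resource:
for s, p, o in match(triples, s="http://example.org/book/web3"):
    print(p, "->", o)
```

Because the graph is just a set of triples with shared identifiers, merging two databases is nothing more than taking the union of their triples, which is what makes the linking described above possible.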

RDF is commonly serialised as XML (the RDF/XML syntax) so that it can be easily exchanged between machines using different operating systems; other serialisations, such as Turtle, also exist.
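As a sketch of what that serialisation looks like, here is a small RDF/XML document describing a single resource; the example.org URI is a hypothetical identifier chosen for illustration, while the `rdf:` and `dc:` namespaces are the real W3C and Dublin Core ones.

```xml
<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/">
  <rdf:Description rdf:about="http://example.org/book/web3">
    <dc:title>Imagining Web 3.0</dc:title>
    <dc:creator>Lee-Roy Chetty</dc:creator>
  </rdf:Description>
</rdf:RDF>
```

Any XML-aware application on any operating system can parse this file and recover the same two triples about the resource.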

RDF also ties into the architecture of the web through Uniform Resource Identifiers (URIs), which identify the things being described and allow the data in each database to remain in its original form, such as XML or Excel, while still being linked.

Web 3.0 applications are built on RDF because it provides a way to link data, created in the current web 2.0 era, across multiple website and database sources.

With SPARQL, a query language for RDF data, applications can access native graph-based RDF stores and extract data from traditional databases alike.
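A SPARQL query works by pattern-matching over triples, with variables (prefixed `?`) standing in for unknown terms. The query below is a hypothetical example, assuming data described with the Dublin Core vocabulary; it asks for every resource that has both a title and a creator.

```sparql
PREFIX dc: <http://purl.org/dc/elements/1.1/>

SELECT ?book ?title ?creator
WHERE {
  ?book dc:title   ?title .
  ?book dc:creator ?creator .
}
```

Each solution binds `?book`, `?title` and `?creator` to terms from the graph, regardless of which original database the triples came from.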

Web Ontology Language (OWL) is another language which can play a fundamental role in the applications and development of Web 3.0.

OWL and RDF are similar in spirit; however, OWL is seen as the stronger language, with greater machine interpretability than RDF can offer.

OWL is built on top of RDF but comes with a larger vocabulary and a stronger syntax.
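To give a flavour of that larger vocabulary, here is a small sketch in Turtle syntax; the `ex:` namespace and its terms are hypothetical, while `owl:` and `rdfs:` are the real W3C namespaces. It declares a class hierarchy and states that two properties mean the same thing, the kind of assertion plain RDF cannot express.

```turtle
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/schema#> .

ex:Book  a owl:Class .
ex:Ebook a owl:Class ;
         rdfs:subClassOf ex:Book .

# An OWL-specific construct: two independently coined properties
# are declared equivalent, so data using either can be merged.
ex:author owl:equivalentProperty ex:creator .
```

A reasoner reading this can infer, for example, that every `ex:Ebook` is also an `ex:Book`, and that statements made with `ex:author` also hold for `ex:creator`.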

Technology and the data accumulated in the web 2.0 era are the two main building blocks needed for the creation of a semantic web. By integrating these blocks, the vision of web 3.0 will comprise two platforms, namely semantic technologies and a social computing environment.

The notion of a social computing environment means that web 3.0 focuses on the “human-machine” relationship and seeks to organise the large number of existing social web communities.

These semantic technologies have the potential to play an important role in the future evolution of web 3.0: stimulating creativity and innovation by minimising the distance between man and machine, uncovering new business models by shortening the innovation cycle, and prompting the move towards true globalisation. From an online user's perspective, semantic web technologies such as RDF will knit the social web community together.

The key factor in determining the success of the semantic web is for information found on the Web to be presented and labelled so it makes sense to machines.

That can be achieved by the internet's architecture adapting to, and adopting, RDF, OWL and SPARQL.

The evolution to the semantic web will enable devices such as PCs, tablets, cellphones and other internet-enabled terminals to serve as portals, or private windows, into the web; to organise and give well-defined meaning to the vast amount of information found on the internet; and to allow machine reasoning over online information that is ubiquitous and devastatingly powerful.

Web 3.0 can tentatively be described as the “web of openness”. This refers to an internet that has the ability to break down old silos, link everyone everywhere together and make the entire web potentially smarter.

Author

  • Lee-Roy Chetty holds a Master's degree in Media Studies from the University of Cape Town and the University of Massachusetts, Amherst. A two-time recipient of the National Research Fund Scholarship, he is currently completing his PhD at UCT and is the author of Imagining Web 3.0. Follow him on Twitter @leeroy_chetty. He can also be contacted via e-mail at [email protected]
