The term ‘Web 2.0’ refers to the idea of the “New Internet”, or the second wave of the World Wide Web. Web 2.0 is not a specific application or technology; rather, it describes two paradigm shifts within information technology: ‘user-generated content’ and ‘thin client computing’. User-generated content refers to social networking sites such as Facebook, Myspace and YouTube, to blogs and vlogs, and to any web application that lets users create elaborate, personal web pages without any prior programming knowledge. The user-generated content of Web 2.0 is changing the way we use the Internet. Users have transformed the World Wide Web into a pool of knowledge and news that is created and reported on by ‘citizen journalists’. Web 2.0 is radically changing journalism, creating new opportunities on the Internet and advancing globalization at a pace faster than critics can comprehend. A major point of interest with Web 2.0 is the equalization, on a mass scale, between individual users, clients and big corporations.

Thin client computing refers to data and applications that are housed on a web server, giving the user universal access to information from any computer. Although not a new concept for the World Wide Web, thin client computing has the potential to turn the Internet into one giant application server for all users. Web 2.0 also refers to applications in which information seeks out the user, providing him or her with specific, pointed content. Algorithms are employed to direct this information based on the user’s profile and browsing history.

Finally, Web 2.0 is typified by lightweight user interfaces (or Rich Client interfaces). Popular favorites are AJAX and Macromedia (Adobe) Flex. A “Thin Client” is a web application that uses only the user’s browser, without any additional software; most existing web applications are Thin Clients. A “Fat Client” is a web application that requires downloading and installing a third-party application; many VB applications are considered Fat Clients. A “Rich Client”, in contrast, uses the features of the web browser plus a fairly small amount of additional software that extends the browser’s functionality. This allows some additional logic to be performed in the browser itself, which makes validation and special display features easier to implement and also improves performance, and with it the user experience.

A popular Rich Client technology is Ajax (Asynchronous JavaScript and XML). Ajax uses JavaScript to send requests for data from the browser directly to the remote server, and the server responds with data in the form of XML. Because the JavaScript runs asynchronously, the user does not have to wait for the response before making another request (as is the case with “Thin Client” applications). Good examples of this technology are Google Maps and Google Mail.
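To make the Ajax pattern concrete, here is a minimal sketch in plain JavaScript. It assumes a hypothetical server endpoint, /api/latest-messages, that returns XML, and a page element with the id "messages"; a real application would also handle errors.

```javascript
// Minimal Ajax sketch: ask the server for data asynchronously and update
// the page when the response arrives, without reloading the page.
// The URL and the "messages" element are hypothetical.
function loadLatestMessages() {
  var request = new XMLHttpRequest();
  request.open("GET", "/api/latest-messages", true); // true = asynchronous

  request.onreadystatechange = function () {
    // readyState 4 means the response has fully arrived.
    if (request.readyState === 4 && request.status === 200) {
      // Classic Ajax responses are XML; responseXML is a parsed document.
      var messages = request.responseXML.getElementsByTagName("message");
      var list = document.getElementById("messages");
      for (var i = 0; i < messages.length; i++) {
        var item = document.createElement("li");
        item.appendChild(document.createTextNode(messages[i].textContent));
        list.appendChild(item);
      }
    }
  };

  request.send(); // returns immediately; the user can keep working
}
```

The key point is the last line: send() returns right away, so the page stays responsive while the data travels back.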
The Design Aspects of Web 2.0

There are a number of design aspects that separate Web 2.0 from Web 1.0. With Web 1.0, a small group of writers generated web pages that were viewed by a large number of readers. Because of this, it became possible for viewers to go directly to a source to retrieve important information. Since then, however, a number of changes have occurred.
First, many people have gone from simply viewing information to writing and publishing it on the web themselves. A good example of this is the rapid rise of blogging and of bloggers. Instead of simply being viewers, these people have become participants.
The effect of so many people adding information to the web is that a tremendous amount of information is now available. As more people began to self-publish on their own websites and blogs, it became obvious that the design of Web 1.0 needed to change. The result of that change is Web 2.0. With the Web 2.0 design, data on the web can be split into microcontent that is distributed over a large number of domains. This is important for a number of reasons. First, people are no longer tied to the older, centralized sources of data. The goal of Web 2.0 is to offer a selection of tools that can work with this microcontent in ways that are useful.
The tools of Web 2.0 are responsible for creating a new kind of interface. A number of modern systems are facilitating this process, and one of them is the RSS aggregator. In addition, search engines are playing an important role in the Web 2.0 process, and the introduction of Google Maps has also influenced Web 2.0 design. This change in design is revolutionary because it alters the way humans store and share information. With Web 2.0, the domain the information comes from is not that important. With this new interface, the Internet can be described as a platform that is responsible for the interaction with content.
With Web 2.0, the Internet becomes a place where users can build interfaces based on the information they receive from a variety of different organizations. The information can then be combined in ways that are superior to what any one domain offers. As an example, Amazon.com makes its content available to anyone who wants to use it, and visitors to Amazon.com can build their own customized pages with access to the specific information that interests them. This is important because the information can be personalized to meet the needs of the people who use it.
Once content becomes more personalized, it will create an online experience that is far superior to anything we have today. There are six key design aspects of Web 2.0 that will shape the future of the Internet. The first design aspect of Web 2.0 is the XML transition. If Web 2.0 is to be a success, it is crucial for it to use semantic markup, markup that describes the content it is applied to. The most prominent language for display is XHTML, and presentation can be applied to its tags through CSS. It should also be possible for designers to describe content, but only in a way that is consistent with the XHTML tags; a small sketch of what this separation looks like follows below.
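To make the idea concrete, here is a minimal, hypothetical example of semantic XHTML. The markup and class names describe what the content is, while the CSS in the head decides how it looks; all of the names, text and styles are invented for illustration.

```html
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
  <head>
    <title>Semantic markup sketch</title>
    <style type="text/css">
      /* Presentation lives here, separate from the content below. */
      .summary { font-style: italic; }
      .byline  { font-size: 0.8em; color: #777777; }
    </style>
  </head>
  <body>
    <!-- The tags and class names say what the content is, not how it looks. -->
    <div class="news-item">
      <h2>Neighborhood crime statistics released</h2>
      <p class="summary">The city published new block-level crime data today.</p>
      <p class="byline">Posted by a citizen journalist.</p>
    </div>
  </body>
</html>
```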
The Potential of Web 2.0

One of the most powerful features of Web 2.0 is the ability it gives users to rapidly build applications. Not only does Web 2.0 allow users to build applications within a short period of time, they can do so without advanced technical knowledge. There are a number of reasons why this is revolutionary. By reducing the knowledge necessary to build applications, it lets more people construct them. It should first be emphasized that Web 2.0 is not a single thing. It is more a group of approaches, and these approaches are designed in a way that allows applications to be developed within a short period of time.

To understand how people are able to quickly build applications with little to no programming skill, you first need to become familiar with APIs, or Application Programming Interfaces. The API for Google Maps lets a user place data on any map that Google Maps can generate. As an example, someone can overlay crime statistics on Google map data; a minimal sketch of that idea follows below. Because the interface to the Google API is so well designed, the people who develop these applications can put their emphasis on the data sources instead.
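The sketch below assumes the Google Maps JavaScript API has already been loaded on the page and that crimeReports is a hypothetical array of records taken from some public data feed; the coordinates, field names and element id are made up for illustration.

```javascript
// Hypothetical crime data, e.g. fetched with Ajax from a city's public feed.
var crimeReports = [
  { lat: 41.8919, lng: -87.6278, description: "Bicycle theft, May 12" },
  { lat: 41.8827, lng: -87.6233, description: "Vandalism, May 14" }
];

// Draw a map centered on the neighborhood...
var map = new google.maps.Map(document.getElementById("map"), {
  center: { lat: 41.8875, lng: -87.6255 },
  zoom: 14
});

// ...and drop one marker per report. The mashup author writes almost no
// mapping code; the Maps API does the heavy lifting.
crimeReports.forEach(function (report) {
  new google.maps.Marker({
    position: { lat: report.lat, lng: report.lng },
    map: map,
    title: report.description
  });
});
```

A few dozen lines like these are enough to combine someone else's map with someone else's data, which is exactly the kind of application Web 2.0 makes easy.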
The idea of providing a programming interface is not new; Microsoft, for example, exposed APIs for its Office software products. However, those APIs were very complex, and whenever Microsoft released an update they would go through a large number of changes. Because of the complexity involved with the Microsoft APIs, it is uncommon to find applications that were built on them. This is where the newer APIs from Google and Amazon differ: they are far less complex, and that allows more people to use them. In addition to large organizations like Google and Amazon, a number of small companies are publishing APIs as well. The original inventors responsible for these APIs were people who specialized in object-oriented components. Another important system that shows the potential of Web 2.0 is RSS.
With Web 2.0, RSS can be used as an interface. Because the Google Maps API is so simple, many people are able to use RSS in conjunction with it. RSS stands for Really Simple Syndication, and it is a computer-generated file format that a website uses to communicate with other websites. By using this simple structure, RSS allows developers to pull in data from any number of distant sources. A stripped-down example of the format appears below.
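Here is a minimal, hypothetical RSS 2.0 feed; the titles, dates and URLs are invented, but the structure, a channel containing items that each carry a title, link and description, is what an aggregator or a mashup actually reads.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Neighborhood Watch Blog</title>
    <link>http://example.com/blog</link>
    <description>A hypothetical citizen-journalism blog.</description>
    <item>
      <title>Crime statistics mashup now live</title>
      <link>http://example.com/blog/crime-map</link>
      <pubDate>Mon, 12 May 2008 09:00:00 GMT</pubDate>
      <description>We plotted the city's crime feed on a Google map.</description>
    </item>
    <item>
      <title>Farmers market opens downtown</title>
      <link>http://example.com/blog/farmers-market</link>
      <pubDate>Sat, 10 May 2008 14:30:00 GMT</pubDate>
      <description>Photos and a short write-up from opening day.</description>
    </item>
  </channel>
</rss>
```

A consumer of this feed only needs to fetch the file and read the item elements; it does not need to know anything else about the site that produced them.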
One area in which the potential of Web 2.0 will be most visible is social networking. The power of social networking becomes very apparent when a user is given the ability to define their relationships with other users on the site. In most social networking systems, the primary data that exists is yours and the other members'. You are able to browse your own information, and you can view the data that has been submitted by the other members. Beyond this, the social networking layer allows a user to separate the group of people they know from those they don't know; a small sketch of this idea follows below. As you can imagine, making this separation is what makes a social network highly valuable. Companies that specialize in social networks are expected to make a great deal of money once Web 2.0 is fully utilized. While the approach is quite impressive, there is still a lot of work to do before it is ready for the general public.
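As a rough illustration of why that separation matters, here is a small, hypothetical JavaScript sketch that filters a site-wide stream of updates down to the ones written by people the user has marked as friends; all of the data and field names are invented.

```javascript
// The user's social data: who they are and who they have marked as friends.
var currentUser = {
  name: "dana",
  friends: ["alice", "raj", "miguel"] // relationships the user has defined
};

// A site-wide stream of updates submitted by all members.
var allUpdates = [
  { author: "alice", text: "Posted new photos from the lake." },
  { author: "spammer42", text: "Buy cheap watches!!!" },
  { author: "raj", text: "Started a blog about local politics." }
];

// Keep only updates from people the user knows. This simple separation is
// what turns an anonymous pile of data into a personal, valuable network.
var friendUpdates = allUpdates.filter(function (update) {
  return currentUser.friends.indexOf(update.author) !== -1;
});

friendUpdates.forEach(function (update) {
  console.log(update.author + ": " + update.text);
});
```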