One of the principal objectives of the Client-Server approach is to provide data to an end user, but Client-Server architectures go well beyond that. Client-Server describes the arrangement in which a client program initiates contact with a separate server program over a network for a particular purpose. The client, in these cases, is the requester of a service that the server is expected to provide.
Over the past two decades we have witnessed the evolution of large-scale, complex information systems. During this period, Client-Server computing models have come to be accepted as the preferred architecture for designing and deploying applications.
Client-Server Models serve as the foundation of current enabling technologies such as workflow and groupware systems.
Client-Server technologies will certainly continue to have a major effect on technological transformations. They have already driven a recent one, in which network computing sawed monolithic, mainframe-based applications in half and split them into two components – Client and Server.
In the past, Client-Server systems have been associated with a desktop PC connected via a network to an SQL database server of some sort. In actuality, however, the term “Client-Server” refers to a logical model that divides tasks into two layers, marked either “Client” or “Server”.
Within the Information Technology sector, a very simple form of Client-Server computing has been practiced since the inception of the mainframe: the Single-Tier (One-Tier) system, which consists of a mainframe host connected directly to a terminal.
In Two-Tier Client-Server architecture, however, the client is in direct communication with the database. The application or business logic then resides either on the client or on the database server, where it takes the form of stored procedures.
Client-Server models initially emerged alongside the applications being developed for local area networks in the latter half of the ’80s and the first half of the ’90s. These models were mostly based on elementary file-sharing techniques implemented by Xbase-style products such as Paradox, FoxPro, Clipper, and dBase.
Fat Clients and Fat Servers
At first, the Two-Tier model required a non-mainframe server as well as an intelligent fat client, which is where most of the processing took place. This configuration did not scale well and could not accommodate larger systems; with fifty or more connected clients it no longer functioned properly.
The GUI (Graphical User Interface) then came into being as the most common desktop environment. Alongside Graphical User Interface technology, a new form of Two-Tier architecture emerged: the general-purpose LAN file server was replaced by a new, specialized database server, and new development tools such as Visual Basic, Delphi, and PowerBuilder appeared.
While much of the major processing still took place on the fat clients, datasets could now be delivered to the client by using Structured Query Language (SQL) requests against a database server. The server would then merely return the results of the queries made.
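As a minimal sketch of this fat-client pattern, the desktop client below opens a JDBC connection straight to the database server and ships it a SQL request; the connection URL, credentials, and the orders table are illustrative assumptions rather than details from any particular system.

    // Two-Tier fat-client sketch: the desktop program talks directly to the
    // database server and does all remaining processing itself.
    import java.sql.*;

    public class FatClientReport {
        public static void main(String[] args) throws SQLException {
            String url = "jdbc:postgresql://dbserver:5432/sales"; // assumed database server
            try (Connection con = DriverManager.getConnection(url, "clerk", "secret");
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(
                         "SELECT customer, SUM(amount) AS total FROM orders GROUP BY customer")) {
                while (rs.next()) {
                    // Formatting, sorting, and display all happen on the client -
                    // this is what makes the client "fat".
                    System.out.printf("%s: %.2f%n", rs.getString("customer"), rs.getDouble("total"));
                }
            }
        }
    }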
The more complex the application becomes, the fatter the client gets, and the more powerful the client hardware must be to support it. As a result, the cost of adequate client hardware can become prohibitive and may in fact defeat the affordability of the application.
What is more, the network footprint of fat clients is incredibly large (think Bigfoot), so there is an inevitable reduction in the network’s bandwidth and in the number of users who can use the network effectively.
Another approach often taken in Two-Tier architecture is the thin client / fat server configuration, in which the client invokes procedures stored on the database server. The fat server model performs more efficiently, as its network footprint, while still heavy, is a lot lighter than that of the fat client method.
The downside of this approach is that stored procedures tie the application to proprietary code and customization, because they rely on a single vendor’s procedural functionality. What is more, since stored procedures tend to be buried deep in the database, every database containing a procedure has to be modified whenever the business logic changes. This can lead to major management problems, particularly with large distributed databases.
In either case, a remote database transport protocol (SQL-Net, for example) is used to carry the transaction. Such models require a heavy network process to mediate between Client and Server, and the weight of these transactions slows queries and limits the transaction volume the network can carry.
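As a minimal sketch of the fat-server side of this trade-off, the thin client below merely invokes server-side logic through JDBC; the stored procedure name apply_discount, its parameters, and the connection details are hypothetical.

    // Thin-client / fat-server sketch: the business logic lives in a stored
    // procedure on the database server; the client only calls it.
    import java.sql.*;

    public class ThinClientCall {
        public static void main(String[] args) throws SQLException {
            String url = "jdbc:postgresql://dbserver:5432/sales"; // assumed server
            try (Connection con = DriverManager.getConnection(url, "clerk", "secret");
                 CallableStatement cs = con.prepareCall("{call apply_discount(?, ?)}")) {
                cs.setLong(1, 42L);      // hypothetical customer id
                cs.setDouble(2, 0.10);   // hypothetical discount rate
                cs.execute();            // the logic runs inside the database server
            }
        }
    }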
Regardless of which technique was employed, these Client-Server systems still could not scale beyond about a hundred users, and this form of architecture tends not to be well suited to mission-critical applications.
Three-Tier Client-Server Architecture
More recently, a middle tier was added to Client-Server implementations, effectively creating a three-tier structure. In a Three-Tier or N-Tier environment, the client implements the presentation logic, the business logic is implemented on application servers, and the data resides on database servers.
The following three component layers define a Multi-Tier (or N-Tier) architecture.
First, there is the front-end component, which provides portable presentation logic.
Secondly, there is the middle tier, which enables users to share and control the business logic by isolating it from the application itself.
Finally, there is the back-end component, which provides access to services such as database servers.
Multi-Tier architecture augments the Two-Tier structure by introducing middle-tier components. The client system works with the middle tier through standard protocols such as RPC and HTTP, and the middle tier interacts with the back-end server through standard database interfaces such as JDBC, ODBC, and SQL.
The vast majority of the application logic is contained in the middle tier. It is here that client calls are translated into database queries, and data from the database is translated back into client data.
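The sketch below illustrates that translation under a few stated assumptions: the client reaches the middle tier over plain HTTP, the middle tier reaches the database server over JDBC, and the customers table, connection URL, and credentials are all hypothetical.

    // Middle-tier sketch: HTTP in from the presentation tier, JDBC/SQL out to
    // the database tier. All names and connection details are illustrative.
    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;
    import java.sql.*;

    public class MiddleTier {
        static final String DB_URL = "jdbc:postgresql://dbhost:5432/sales"; // assumed back end

        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            // Presentation-tier clients call GET /customers?region=WEST over HTTP.
            server.createContext("/customers", exchange -> {
                String query = exchange.getRequestURI().getQuery();   // e.g. "region=WEST"
                String region = query == null ? "" : query.replaceFirst("region=", "");
                StringBuilder body = new StringBuilder();
                try (Connection con = DriverManager.getConnection(DB_URL, "app", "secret");
                     PreparedStatement ps = con.prepareStatement(
                             "SELECT id, name FROM customers WHERE region = ?")) {
                    ps.setString(1, region);                // client call -> SQL query
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {                 // SQL rows -> client data
                            body.append(rs.getLong("id")).append(',')
                                .append(rs.getString("name")).append('\n');
                        }
                    }
                } catch (SQLException e) {
                    body.setLength(0);
                    body.append("error: ").append(e.getMessage());
                }
                byte[] bytes = body.toString().getBytes(StandardCharsets.UTF_8);
                exchange.sendResponseHeaders(200, bytes.length);
                try (OutputStream os = exchange.getResponseBody()) { os.write(bytes); }
            });
            server.start();
        }
    }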
Positioning the business logic on the application server maximizes scalability and isolates the business logic, which makes it easier to keep up with a business’s rapidly evolving requirements. It also allows a more open choice of database vendors.
Three-Tier architecture can extend to N Tiers when the middle tier provides connections to a variety of different services, integrating them and coupling them to the client as well as to each other.
N-Tier Architecture
As Client-Server models evolved throughout the decade, many Multi-Tier architectures began to appear, enabling computers on the client side to function as both clients and servers. Once software developers realized that smaller processes were a lot simpler to design, not to mention cheaper and faster to implement, N-Tier models increased in popularity quite rapidly. The same principles applied to the client side were then applied to the server side, and as a result thinner, more specialized server processes evolved.
These days, N-Tier architecture dominates the industry, and the vast majority of new information-systems development is being built as N-Tier systems.
It should be noted, however, that N-Tier architecture does not preclude the use of Two-Tier or Three-Tier models. Depending on the scale and requirements of a particular application, Two-Tier or Three-Tier models are still very often used for departmental applications.
N-Tier computing is widely considered the most effective model today, as it integrates contemporary information technology into a much more flexible model. It is widely believed that the percentage of applications using an N-Tier model will grow fourfold within the next two years.
Three-Tier and N-Tier systems are mainly able to do two things that Two-Tier systems cannot: partition the application processing load among several different servers, and funnel database connections. By centralizing application logic in the middle tier, a developer can update the business logic without having to redeploy the application to thousands of desktops.
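A minimal sketch of that connection funneling follows, assuming a hand-rolled pool purely for illustration (a real middle tier would typically use a vendor or framework DataSource): many client requests share a small, fixed set of JDBC connections held by the middle tier instead of each desktop opening its own.

    // Connection-funneling sketch: a fixed pool of reusable JDBC connections.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class ConnectionFunnel {
        private final BlockingQueue<Connection> pool;

        public ConnectionFunnel(String url, String user, String pass, int size) throws SQLException {
            pool = new ArrayBlockingQueue<>(size);
            for (int i = 0; i < size; i++) {
                pool.add(DriverManager.getConnection(url, user, pass)); // open once, reuse many times
            }
        }

        // A request thread borrows a connection, blocking until one is free.
        public Connection borrow() throws InterruptedException {
            return pool.take();
        }

        // The connection is returned to the pool rather than closed.
        public void giveBack(Connection con) {
            pool.offer(con);
        }
    }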
Distributed Processing
N-Tier computing attains a high level of synergy by combining different computing models and providing centralized common services in a single distributed environment.
The multi-level distributed architecture in question relies on a back-end host of some kind, an intelligent client, and several intelligent agents that control activities such as Online Transaction Processing (OLTP), message handling, and transaction monitoring.
Such architectures tend to rely heavily on object-oriented methodologies, which help to achieve the greatest possible interchangeability and flexibility.
TP monitors, distributed objects, and application-partitioning tools can all contribute to spreading the processing load among many different machines, supporting a virtually unlimited number of processing loads and users – quite a far cry from the Two-Tier architectural models of the past. Indeed, N-Tier is here to stay, at least for the foreseeable future.