Client-Server Trends for 2007
The Connection Between Computers and Network Applications
Client-server technology describes the relationship between computers and programs on a network. For the purposes of this article, unless otherwise specified, clients and servers refer to programs, and it is worth remembering that the same program can act as a client in one exchange and a server in another. In traditional client-server technology, a client initiates a request to a server. The server processes the request and delivers the response to the client. This process can take place within a single machine that contains both client and server programs.
However, the true strength of client-server technology shows itself in network programming. Through networking, client-server models can communicate with and integrate applications distributed across different locations. This dispersal improves efficiency because applications only run when requested. Client-server computing is one of the foundational technologies of network programming, and most business applications are written for client-server environments. Perhaps the most convincing advertisement for the technology, though, is that it is the primary organizing principle behind the Internet. TCP/IP, which stands for Transmission Control Protocol/Internet Protocol, is the protocol suite that structures communication over the Internet. In traditional client-server technology, one server awaits the requests of multiple clients. A server that runs continuously by itself under the operating system, forwarding requests to the appropriate programs or processes, is also called a daemon. For example, each web server runs an HTTPD (Hypertext Transfer Protocol Daemon) that waits for requests from web clients and their users.
The Internet, the most powerful example of client-server technology, illustrates how most people encounter it. In this configuration, the web browser functions as the client program, requesting web pages (sometimes including light processing) from a web or HTTP server located elsewhere. TCP/IP also makes it possible for clients to request files from other computers connected to the Internet through FTP, the File Transfer Protocol.
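The request-response cycle described above can be sketched in a few lines of Python. This is a minimal illustration, not a real protocol: the "echo" behavior, the loopback address, and the message contents are assumptions chosen only to show a client initiating a request and a server delivering a response.

```python
import socket
import threading

# Server side: await one client request, process it, and respond
# (a hypothetical echo protocol stands in for real processing).
def serve_once(listener):
    conn, _ = listener.accept()
    with conn:
        request = conn.recv(1024)
        conn.sendall(b"echo: " + request)

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]
worker = threading.Thread(target=serve_once, args=(listener,))
worker.start()

# Client side: initiate the request and read the server's response.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
worker.join()
listener.close()
print(reply.decode())  # echo: hello
```

Because both programs run on one machine here, the sketch also demonstrates the point above that client and server can coexist on a single host.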
This article will discuss the development of client-server technology with an eye towards upcoming trends.
Architecture
Traditional client-server technology has been two or three tier. The first generation of client-server architecture was two tiered, usually organized with ‘fat’ clients and ‘thin’ servers: most of the processing occurred in the client, while the server returned query results from a database through dynamic SQL (Structured Query Language). Interfaces are CLIs (Call Level Interfaces), which standardize SQL so that requests pass directly to servers without needing to be recompiled. This architecture can be inverted by opting for a thin client and a fat server, a configuration that can improve performance because stored procedures run on the database server, which usually has more processing power than a client. In both two-tier models, however, the request is presented by the client, processing is divided between client and server, and the server accesses stored data and delivers responses.
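The thin-client/fat-server division can be sketched as follows. As an assumption for illustration, an in-memory SQLite database stands in for the fat server, and Python's DB-API plays the role of a call-level interface passing a parameterized query straight through; the table and values are invented.

```python
import sqlite3

# Fat server stand-in: the data and the query processing live here.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER, total REAL)")
db.executemany("INSERT INTO orders VALUES (?, ?)",
               [(1, 9.5), (2, 20.0), (3, 42.0)])

# Thin client: merely submits a parameterized SQL request through the
# call-level interface and receives the result set.
rows = db.execute("SELECT id FROM orders WHERE total > ?", (10,)).fetchall()
db.close()
print(rows)  # [(2,), (3,)]
```

The client never sees the stored data or the query plan; it only presents the request and consumes the response, matching the two-tier division described above.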
Three-tier client-server architecture is divided into three layers: a presentation layer, a function layer, and a data layer. The middle tier, or application server, makes up the function layer and is usually written in portable, open languages such as C or C++. These servers can be multi-threaded, which allows segments of a program to run concurrently. A thread of execution is similar to a process: a sequence of instructions to be carried out. Multi-threading allows tasks to run simultaneously and, on suitable hardware, in parallel. The client usually communicates with the middle tier through APIs (Application Programming Interfaces) or RPC (Remote Procedure Call) protocols, while standard database protocols structure communication between the middle tier and the data server. The middle tier contains the majority of the application logic and translates queries and responses so that they are intelligible to both endpoints. Three-tier models offer more flexibility by not requiring that clients and servers use the same language; the middle layer translates between them. They also allow for more sophisticated allocation of resources. And because middle tiers are modular in design, modules can be applied to and re-used by different applications.
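The multi-threading idea can be sketched with a thread pool, here in Python rather than the C or C++ a real middle tier would likely use. The handler function and queries are placeholders standing in for the middle tier's translation work.

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder for the middle tier's work: translate a client query
# and produce a response the client can understand.
def handle_request(query):
    return f"result for {query}"

# Four worker threads service requests concurrently, the way a
# multi-threaded application server handles several clients at once.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(handle_request, ["q1", "q2", "q3"]))
print(results)  # ['result for q1', 'result for q2', 'result for q3']
```

Each submitted request becomes a separate thread of execution, so one slow query need not block the others.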
The latest developments in client-server architecture are toward N-tiered or multi-tiered models, which allow the middle tier to connect to a variety of services and facilitate integration between clients and multiple servers.
As web applications become more popular, N-tiered architecture can be used to manage the increased network complexity. For example, a client sends a request to a web server that communicates with an application server, which forwards the request to the data server. The web server takes on some of the processing by running CGI (Common Gateway Interface) scripts that support dynamic content. These scripts also parse (break down) and validate requests, assemble formatted responses from the database server, and return responses in a form the client can understand.
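A web server parsing a request and returning dynamic content can be sketched with Python's built-in HTTP machinery; this uses an in-process handler rather than an actual CGI script, and the query parameter and response text are invented for illustration.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

class DynamicHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Parse and validate the request, as a CGI script would,
        # then assemble a dynamic response.
        params = parse_qs(urlparse(self.path).query)
        name = params.get("name", ["world"])[0]
        body = f"hello {name}".encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), DynamicHandler)
threading.Thread(target=server.handle_request).start()

# The client's query shapes the response it gets back.
url = f"http://127.0.0.1:{server.server_port}/?name=client"
reply = urllib.request.urlopen(url).read().decode()
server.server_close()
print(reply)  # hello client
```

In a full N-tier deployment the handler would forward the parsed request onward to an application or data server instead of answering directly.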
Development of Distributed and Peer-to-Peer Computing
Truth be told, the biggest trend in client-server technology is a move away from it. Networking models are currently moving toward distributed and peer-to-peer computing.
Distributed computing allows parts of a program to run simultaneously on multiple computers that communicate through a network. It not only separates processing by running different application parts at the same time, but must also account for the different environments in which those parts operate. Several client-server technologies lend themselves to distributed computing, such as RPC (Remote Procedure Call), RMI (Remote Method Invocation), and .NET Remoting. The goals and advantages of distributed computing include openness, transparency, and scalability.
Distributed function processing is the most complicated form of application building. Functions are divided between clients and servers, and it takes considerable work for developers to decide where each function belongs and what type of communication must occur between programs and locations. For the communication of data and dialog, distributed client-server environments rely on message-based communication or RPC. The most important developments in distributed function processing are occurring through web services. The tools for developing this form of processing include XML-RPC (Extensible Markup Language-Remote Procedure Call), SOAP (Simple Object Access Protocol, the foundation of most commercial web services), UDDI (Universal Description, Discovery, and Integration), and WSDL (Web Services Description Language). These allow different programs to exchange data and use each other over the Internet without having to know how the other is implemented or connected.
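The remote-procedure idea behind XML-RPC can be sketched with Python's standard library. The exposed `add` procedure is a made-up example; the point is that the caller knows only the procedure name and the XML-RPC contract, not how the server is implemented.

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

# A server exposes a procedure by name over XML-RPC.
def add(a, b):
    return a + b

server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(add, "add")
port = server.server_address[1]
threading.Thread(target=server.handle_request).start()

# The client invokes the remote function as if it were local;
# arguments and results travel as XML over HTTP.
proxy = ServerProxy(f"http://127.0.0.1:{port}/")
result = proxy.add(2, 3)
server.server_close()
print(result)  # 5
```

SOAP-based web services follow the same pattern with a richer envelope, and WSDL describes such interfaces so that clients can discover them.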
Peer-to-peer, or P2P, computing is also threatening client-server dominance. It provides an alternative networking model in which every endpoint has the same capacity and, unlike in client-server models, any node can initiate communication. This gives each node both server and client capabilities. Rather than distinguishing endpoints as client and server, peer-to-peer networking focuses on applications sharing files directly over the Internet or through some mediating server. Corporations see advantages in P2P models because they eliminate the cost of maintaining a centralized server.
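The symmetry of P2P can be sketched by giving every node both a listening socket and the ability to connect out. The `Peer` class and its one-word "pong" reply are invented for illustration; real P2P systems add discovery, routing, and file transfer on top of this symmetry.

```python
import socket
import threading

class Peer:
    """Hypothetical node with both server and client capabilities."""

    def __init__(self):
        self.listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.listener.bind(("127.0.0.1", 0))
        self.listener.listen(1)
        self.port = self.listener.getsockname()[1]
        threading.Thread(target=self._serve, daemon=True).start()

    def _serve(self):
        # Server role: answer any peer that connects.
        while True:
            try:
                conn, _ = self.listener.accept()
            except OSError:  # listener closed, stop serving
                return
            with conn:
                conn.sendall(b"pong")

    def ask(self, port):
        # Client role: initiate a request to another peer.
        with socket.create_connection(("127.0.0.1", port)) as conn:
            return conn.recv(1024).decode()

a, b = Peer(), Peer()
# Either node can initiate communication with the other.
first, second = a.ask(b.port), b.ask(a.port)
a.listener.close()
b.listener.close()
print(first, second)  # pong pong
```

No node is privileged here: each one serves and requests, which is exactly what distinguishes the model from client-server.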
Software Trends
The most important software trends in client-server technology revolve around system integration, testing, and application architecture. Enterprise Resource Planning (ERP) products help with system integration. ERP refers to multi-module application software, typically integrated with a relational database, that facilitates business-wide processing spanning product planning, inventory maintenance, customer service, and order tracking. The leading companies producing ERP-related products are Oracle, SAP, Baan, and PeopleSoft. Siebel, Clarify, and Relational Technology System’s Trilogy are developing sales force automation and front-end systems. Design and product management systems are also a concern for ERP; Sherpa and Parametric Technology Corp., with its Windchill product, are on the cutting edge of these systems.
Distributed computing and web services are increasingly replacing the desktop. Therefore, the most important skills in client-server building are software distribution skills and the management of tools related to application distribution. The development of client-server environments requires skills in Visual Basic and PowerBuilder. Visual Basic, developed by Microsoft, became the dominant language for building Windows applications because of its visual user interface; it is now a component of Visual Studio .NET. Visual Basic for Applications (VBA) provides a common language across Microsoft applications. PowerBuilder is an event-driven programming language used for RAD (Rapid Application Development). Both provide integrated development environments.
The most important way to balance application performance, overall processing, and operating costs is to mix and match, creatively combining different products for specific business needs so that each resource functions efficiently within the capacity for which it is needed. Software packages often include programs that merely take up space on the server and are rarely used. C++ is a high-demand language for client-server development that supports multiple paradigms, joining high-level programming with low-level facilities. High-level languages, so called for their level of abstraction away from machine language, are more portable: they adapt to different platforms and support routines and object-oriented design. However, C++ also includes low-level facilities that interact closely with the hardware, approaching the control offered by machine and assembly languages. Oracle and Access both need database designers and application designers for thin-client processing.