Architecture patterns help define the basic characteristics and behavior of an application. Each pattern makes sense in a different context, so to choose correctly, developers should understand the characteristics, strengths, and weaknesses of each pattern.
Layered architecture is the de facto standard for most Java EE applications. It is widely known among developers and has become a natural choice for most business application development.
Components within the layered architecture pattern are organized into horizontal layers, each layer performing a specific role within the application. For example, we can divide all source code into four layers:
- presentation layer
- business layer
- persistence layer
- database layer
More or fewer layers are acceptable, depending on the scale of your application. There are various layering schemes in use, but they all share the same principles.
Furthermore, the code in each layer can be divided into components, and components should have separated concerns. For example, components in the presentation layer must not perform database operations (such as issuing SQL statements) directly.
This separation of concerns among components gives rise to layers of isolation: each layer can be changed, or even replaced, with little impact on the others, because the layers are independent of one another. We call such layers closed because every layer prevents the layer above it from knowing anything about the layers beneath it.
Sometimes, however, it makes sense to make a layer open. Opening a layer makes it "translucent": the layer above the open layer may bypass it and access the layer below it directly. An open layer is usually a shared services layer that provides common support to the layers above it. It is important to document which layers are open and why.
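As a minimal sketch of closed layers (all class and method names here are hypothetical), each layer can be modeled as an object that holds a reference only to the layer directly beneath it, so the presentation layer never sees the persistence or database layers:

```python
# A minimal sketch of closed layers: each layer talks only to the one
# directly beneath it. All names here are hypothetical.

class DatabaseLayer:
    def query(self, customer_id):
        # Stand-in for real SQL access.
        return {"id": customer_id, "name": "Alice"}

class PersistenceLayer:
    def __init__(self, db):
        self._db = db
    def load_customer(self, customer_id):
        return self._db.query(customer_id)

class BusinessLayer:
    def __init__(self, persistence):
        self._persistence = persistence
    def customer_greeting(self, customer_id):
        customer = self._persistence.load_customer(customer_id)
        return f"Hello, {customer['name']}!"

class PresentationLayer:
    # The presentation layer knows only the business layer; the
    # persistence and database layers are hidden behind it.
    def __init__(self, business):
        self._business = business
    def render(self, customer_id):
        return self._business.customer_greeting(customer_id)

app = PresentationLayer(BusinessLayer(PersistenceLayer(DatabaseLayer())))
print(app.render(42))  # Hello, Alice!
```

Opening a layer would correspond to handing `PresentationLayer` a direct reference to a lower layer as well, which is exactly why open layers should be documented.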
Here are two things to consider when using the layered architecture:
- Architecture sinkhole anti-pattern: this describes the situation in which requests flow through multiple layers of the architecture as simple pass-through processing, with little or no logic performed in each layer. When too many requests fall into this anti-pattern, opening some layers is one reasonable solution
- An application built on the layered architecture pattern tends to lend itself toward a monolithic application
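The sinkhole anti-pattern can be illustrated with a hypothetical example: a business-layer method that merely forwards the call to the persistence layer, adding no logic of its own.

```python
# Hypothetical illustration of the sinkhole anti-pattern: the request
# passes through the business layer without any logic being applied.

class PersistenceService:
    def find_order(self, order_id):
        return {"id": order_id, "status": "shipped"}

class OrderBusinessService:
    def __init__(self, persistence):
        self._persistence = persistence
    def get_order(self, order_id):
        # Pure pass-through: no validation, no rules, no transformation.
        return self._persistence.find_order(order_id)

service = OrderBusinessService(PersistenceService())
print(service.get_order(7)["status"])  # shipped
```

If most of a layer's methods look like `get_order` above, that layer is a candidate for being opened.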
- Overall agility: low, because of the monolithic nature of most implementations and the tight coupling of components usually found with this pattern
- Ease of deployment: low. This pattern is not designed for a continuous delivery pipeline
- Testability: high. Any layer can be mocked in testing
- Performance: low. This pattern is not designed for high performance
- Scalability: low, because of its tendency toward tightly coupled and monolithic implementations
- Ease of development: high, because it is a general-purpose, well-known pattern; moreover, in line with Conway's law, its layers map naturally onto the team structure of a typical business company
The event-driven architecture is made up of highly decoupled, single-purpose event processing components that asynchronously receive and process events. This architecture consists of two main topologies: mediator topology and broker topology. Characteristics and implementation strategies differ between these two topologies.
A system built on the event-driven architecture treats every input as an event, which is dispatched to the appropriate component for processing.
If your events have multiple steps and require some level of orchestration, the mediator topology is a good fit.
There are four parts in this type of architecture:
- event queues
- an event mediator
- event channels
- event processors
And this is how they work:
- A client sends a request, called the initial event, to an event queue
- The event mediator, which knows only the steps required to process the initial event, receives it and generates additional asynchronous events, called processing events, which it sends to event channels
- Event processors listening on the event channels receive the processing events and execute specific business logic to process them. Each processor completes its task without relying on other processors
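The three steps above can be sketched synchronously as follows; the "place-order" flow, the channel names, and the processor functions are all hypothetical, and a real system would use a message broker with asynchronous delivery:

```python
from collections import deque

# A minimal, synchronous sketch of the mediator topology.

event_queue = deque()                      # event queue
channels = {"stock": [], "billing": []}    # event channels

def stock_processor(event):
    return f"reserved stock for {event['order_id']}"

def billing_processor(event):
    return f"charged card for {event['order_id']}"

processors = {"stock": stock_processor, "billing": billing_processor}

def mediator(initial_event):
    # The mediator knows the orchestration steps, but holds no business
    # logic: it turns one initial event into processing events.
    if initial_event["type"] == "place-order":
        for channel in ("stock", "billing"):
            channels[channel].append({"order_id": initial_event["order_id"]})

results = []
event_queue.append({"type": "place-order", "order_id": "A1"})
while event_queue:
    mediator(event_queue.popleft())
for name, pending in channels.items():
    for event in pending:
        results.append(processors[name](event))

print(results)  # ['reserved stock for A1', 'charged card for A1']
```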
If your event processing is simple enough that you don't want or need a central event mediator for orchestration, choose the broker topology instead.
There are two parts in this type of architecture:
- A broker, which contains some event channels
- Event processors
And this is how they work:
- An initial event created by a client is dispatched to an event channel of the broker
- Each event processor is responsible for processing the events it is interested in, and after completing its task it may publish a new event, via a channel held by the broker, for other processors to consume
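A minimal sketch of the broker topology follows; the event names and the two processors are hypothetical. Note how the event chain emerges from processors publishing follow-up events, with no mediator in sight:

```python
from collections import defaultdict, deque

# A minimal sketch of the broker topology: each processor reacts to the
# events it cares about and may publish new events back to the broker.

subscriptions = defaultdict(list)   # channel name -> subscribed processors
pending = deque()                   # events waiting inside the broker

def publish(channel, payload):
    pending.append((channel, payload))

def on(channel):
    def register(processor):
        subscriptions[channel].append(processor)
        return processor
    return register

@on("order-placed")
def inventory_processor(payload):
    publish("stock-reserved", payload)   # chain a follow-up event

@on("stock-reserved")
def shipping_processor(payload):
    log.append(f"shipping {payload}")

log = []
publish("order-placed", "order-A1")
while pending:
    channel, payload = pending.popleft()
    for processor in subscriptions[channel]:
        processor(payload)

print(log)  # ['shipping order-A1']
```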
- Design the granularity of event processors carefully. Remember that a single atomic transaction must not be split across separate processors
- Use a standard data format (such as JSON) for communication among event processors, and establish a contract versioning policy right from the start
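One way to honor both advice points at once is a versioned event envelope. The sketch below is a hypothetical format (field names and the "major version" policy are assumptions, not a standard): a common JSON shape plus an explicit contract version lets processors evolve independently.

```python
import json

# A hypothetical versioned event envelope for inter-processor messages.

def make_event(event_type, payload, version="1.0"):
    return json.dumps({"type": event_type, "version": version, "data": payload})

def handle(raw_event):
    event = json.loads(raw_event)
    # Reject contract versions this processor was not built for.
    major = event["version"].split(".")[0]
    if major != "1":
        raise ValueError(f"unsupported contract version {event['version']}")
    return event["data"]

msg = make_event("order-placed", {"order_id": "A1"})
print(handle(msg))  # {'order_id': 'A1'}
```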
- Overall agility: high. Event-processor components are single-purpose and completely decoupled from other event processors
- Ease of deployment: high. For its decoupled nature
- Testability: low. Though individual unit testing is not overly difficult, it does require some sort of specialized testing client or testing tool to generate events. Testing is also complicated by the asynchronous nature of this pattern
- Performance: high. Parallel asynchronous processing pays off
- Scalability: high. For its decoupled nature
- Ease of development: low. For its asynchronous nature
The microkernel architecture makes it easy to extend software with additional features delivered as plug-ins.
This architecture consists of two types of components:
- a core system: contains only the minimal functionality required to make the system operational
- plug-in modules: stand-alone, independent components that contain specialized processing, additional features, and custom code. Because each module is generally independent of every other module, it is important to design a common way for plug-ins to communicate with each other and with the core system
This architecture can be embedded in, or used as part of, another architecture. It is a strong first choice when developing software that must be adjusted frequently in response to users' requirements.
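A minimal sketch of the pattern, with hypothetical plug-ins: the core system exposes a small contract (here, "a plug-in is any callable taking one document"), and all specialized behavior lives in the registered plug-ins.

```python
# A minimal sketch of the microkernel pattern.

class CoreSystem:
    def __init__(self):
        self._plugins = {}

    def register(self, name, plugin):
        # The common contract: a plug-in is any callable taking one document.
        self._plugins[name] = plugin

    def process(self, document):
        # The core only runs the minimal pipeline; all specialized
        # processing lives in plug-ins.
        for plugin in self._plugins.values():
            document = plugin(document)
        return document

def uppercase_plugin(doc):
    return doc.upper()

def exclaim_plugin(doc):
    return doc + "!"

core = CoreSystem()
core.register("uppercase", uppercase_plugin)
core.register("exclaim", exclaim_plugin)
print(core.process("hello"))  # HELLO!
```

Swapping or removing a feature means registering or dropping a plug-in; the core system itself never changes.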
- Overall agility: high. Plugin-in modules can be developed quickly and separately
- Ease of deployment: high, if plug-in modules can be loaded dynamically
- Testability: high. Plug-in modules can be tested in isolation and can be easily mocked by the core system to demonstrate or prototype a particular feature with little or no change to the core system
- Performance: high, since you can leave out the plug-in modules you don't really need
- Scalability: low. Scalability is provided at the level of individual plug-ins rather than the system as a whole
- Ease of development: low. The microkernel architecture requires thoughtful design and contract governance, making it rather complex to implement
The microservices architecture has gained significant traction in the software industry. Theories about this architecture are still evolving, but it is already a viable alternative to monolithic applications and service-oriented architectures.
There are several common core concepts that apply to the general architecture pattern.
- service component: Service components contain one or more modules that represent either a single-purpose function or an independent portion of a large business application. Designing the right level of service component granularity is one of the biggest challenges within a microservices architecture
- separately deployed units: every service component can be deployed as a separate unit
- distributed: all components within the architecture are fully decoupled from one another and accessed through some sort of remote-access protocol (such as REST)
There are two main sources of motivation for this architecture: one is the challenge of continuous delivery for monolithic applications; the other is the practice of the service-oriented architecture pattern (SOA, an architecture for very large-scale applications).
API REST-based topology
The API REST-based topology is useful for websites that expose small, self-contained individual services through some sort of API. In this topology, these fine-grained service components are typically accessed using a REST-based interface implemented through a separately deployed web-based API layer.
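The API layer's job in this topology is essentially dispatch. The sketch below models it as a route table mapping paths to fine-grained service functions; the paths and services are hypothetical, and in a real deployment each service would run as its own separately deployed process behind the API layer.

```python
# A minimal sketch of the API layer in the API REST-based topology.

def user_service(user_id):
    # A fine-grained, single-purpose service component.
    return {"id": user_id, "name": "Alice"}

def quote_service(_):
    return {"quote": "simplicity scales"}

routes = {
    "/users": user_service,
    "/quotes": quote_service,
}

def api_layer(path, argument=None):
    # The API layer only dispatches; each service owns its own logic
    # and, in production, its own deployment unit and data.
    service = routes.get(path)
    if service is None:
        return {"error": 404}
    return service(argument)

print(api_layer("/users", 7))  # {'id': 7, 'name': 'Alice'}
```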
Application REST-based topology
The application REST-based topology differs from the API REST-based approach in that client requests are received through traditional web-based or fat-client business application screens rather than through a simple API layer.
The service components in this topology differ from those in the API REST-based topology in that these service components tend to be larger, more coarse-grained, and represent a small portion of the overall business application rather than fine-grained, single-action services. This topology is common for small to medium-sized business applications that have a relatively low degree of complexity.
Centralized messaging topology
The centralized messaging topology is typically found in larger business applications or applications requiring more sophisticated control over the transport layer between the user interface and the service components. The benefits of this topology over the simple REST-based topology discussed previously are advanced queuing mechanisms, asynchronous messaging, monitoring, error handling, and better overall load balancing and scalability.
The single point of failure and architectural bottleneck issues usually associated with a centralized broker are addressed through broker clustering and broker federation (splitting a single broker instance into multiple broker instances to divide the message throughput load based on functional areas of the system).
- Think carefully when determining the correct level of granularity for service components. Too fine-grained leads to service orchestration requirements; too coarse-grained and the services are no longer "micro"
- Shared databases can sometimes be used to reduce coupling between components
- Reusing functionality can be a problem; sometimes you have to reuse by copying code
- Overall agility: high. Born for it
- Ease of deployment: high. Born for it
- Testability: high, due to the separation and isolation of business functionality
- Performance: low, due to the distributed nature of the microservices architecture pattern
- Scalability: high.
- Ease of development: high
The space-based pattern (also sometimes referred to as the cloud architecture pattern) minimizes the factors that limit application scaling. Most applications that fit into this pattern are standard websites that receive a request from a browser and perform some sort of action.
There are two primary components within this architecture pattern:
- a processing unit: Typically contains the application modules (as well as backend logic), along with an in-memory data grid and an optional asynchronous persistent store for failover. Also contains a replication engine that is used by the virtualized middleware to replicate data changes made by one processing unit to other active processing units
- virtualized middleware: Handles housekeeping and communications. It contains components that control various aspects of data synchronization and request handling. Included in the virtualized middleware are the messaging grid, data grid, processing grid, and deployment manager. The virtualized middleware is essentially the controller for the architecture
There are four main architecture components in the virtualized middleware; we discuss each of them below.
When a request comes into the virtualized-middleware component, the messaging-grid component determines which active processing components are available to receive the request and forwards the request to one of those processing units.
The data grid interacts with the data-replication engine in each processing unit to manage the data replication between processing units when data updates occur. Since the messaging grid can forward a request to any of the processing units available, it is essential that each processing unit contains exactly the same data in its in-memory data grid.
The processing grid is an optional component within the virtualized middleware that manages distributed request processing when there are multiple processing units, each handling a portion of the application.
The deployment manager continually monitors response times and user load, starting up new processing units when load increases and shutting them down when load decreases.
Although the space-based architecture pattern does not require a centralized datastore, one is commonly included to perform the initial in-memory data grid load and asynchronously persist data updates made by the processing units. It is also a common practice to create separate partitions that isolate volatile and widely used transactional data from non-active data, in order to reduce the memory footprint of the in-memory data grid within each processing unit.
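The cooperation between the messaging grid and the data grid described above can be sketched as follows. The classes and the session data are hypothetical; the point is that because every write is replicated to every active processing unit, the messaging grid is free to route any request to any unit.

```python
import random

# A minimal sketch of the space-based pattern: the messaging grid picks
# any active processing unit, and the data grid replicates every write so
# that all in-memory data grids hold identical data.

class ProcessingUnit:
    def __init__(self):
        self.cache = {}      # the unit's in-memory data grid

    def write(self, key, value):
        self.cache[key] = value

units = [ProcessingUnit(), ProcessingUnit(), ProcessingUnit()]

def messaging_grid():
    # Forward the request to any available processing unit.
    return random.choice(units)

def data_grid_write(key, value):
    # Apply the update on one unit, then replicate it to all the others.
    unit = messaging_grid()
    unit.write(key, value)
    for other in units:
        if other is not unit:
            other.write(key, value)

data_grid_write("session:42", {"user": "alice"})
print(units[1].cache)  # {'session:42': {'user': 'alice'}}
```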
- Overall agility: high. Born for it
- Ease of deployment: high. Born for it
- Testability: low, especially in the scalability aspects of the application
- Performance: high. Born for it
- Scalability: high. Born for it
- Ease of development: low. Sophisticated caching and in-memory data grid products make this pattern relatively complex to develop