WebSphere / ObjectGrid architecture

■Introduction
ObjectGrid (OG) provides both local, in-memory data caching and distributed coherent client/server data caching. Local and distributed coherent OG topologies both provide the same application programming model for interacting with the cache.

Distributed coherent OG caches offer increased performance, availability, and scalability, and can be configured using either a static or a dynamic topology. The dynamic deployment topology is new in OG v6.1 and allows automatic balancing of OG servers. Additional servers can be added to an OG without restarting it. This supports everything from very small, simple deployments to very large, terabyte-sized deployments requiring thousands of servers. The static deployment topology is available in all versions of OG and uses a declarative approach to defining the topology.

The following topics are discussed in this section:
-ObjectGrid
-Map
-Schema
--Map Schema
--Entity Schema
-Distributed ObjectGrid concepts
-Dynamic deployment topology
--ObjectGrid container
--Map set
---Partition
---Shard
----Primary Shard
----Replica Shard
-----Synchronous Replica
-----Asynchronous Replica
--Catalog Service
---Location Service
---Placement Service
---Core Group Manager
---Administration
-Static deployment topology
-Choosing a dynamic or static deployment topology


■ObjectGrid
An OG is a logical container for application state. It can be physically mapped to a single JVM or to a thousand-server grid spread over multiple data centers. Each OG has one or more maps defined. In a distributed environment, maps are organized into map sets.
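For illustration, here is a minimal sketch of creating a local OG programmatically with the ObjectGridManager API; the grid and map names are arbitrary:

  import com.ibm.websphere.objectgrid.ObjectGrid;
  import com.ibm.websphere.objectgrid.ObjectGridManager;
  import com.ibm.websphere.objectgrid.ObjectGridManagerFactory;

  public class LocalGridExample {
      public static void main(String[] args) throws Exception {
          // Create a local, in-memory ObjectGrid and define one map on it.
          ObjectGridManager ogm = ObjectGridManagerFactory.getObjectGridManager();
          ObjectGrid grid = ogm.createObjectGrid("grid");   // grid name is arbitrary
          grid.defineMap("employees");                      // map name is arbitrary
          grid.initialize();                                // grid is now ready for sessions
      }
  }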


■Map
A Map is a container for key/value pairs. It allows an application to store a value indexed by a key. Maps support indexes, which can be added to index attributes of the value or parts of the key. These indexes are automatically used by the query engine to determine the most efficient way to execute a query.
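A hedged sketch of the ObjectMap API together with the built-in HashIndex plug-in; the map name and the Employee value class are invented for illustration:

  import com.ibm.websphere.objectgrid.BackingMap;
  import com.ibm.websphere.objectgrid.ObjectGrid;
  import com.ibm.websphere.objectgrid.ObjectGridManagerFactory;
  import com.ibm.websphere.objectgrid.ObjectMap;
  import com.ibm.websphere.objectgrid.Session;
  import com.ibm.websphere.objectgrid.plugins.index.HashIndex;

  public class MapExample {
      // Hypothetical serializable value type with an indexable attribute.
      public static class Employee implements java.io.Serializable {
          private final String lastName;
          public Employee(String lastName) { this.lastName = lastName; }
          public String getLastName() { return lastName; }
      }

      public static void main(String[] args) throws Exception {
          ObjectGrid grid = ObjectGridManagerFactory.getObjectGridManager()
                  .createObjectGrid("grid");
          BackingMap bm = grid.defineMap("employees");

          // Index the value's lastName attribute; queries can use it automatically.
          HashIndex index = new HashIndex();
          index.setName("lastNameIndex");
          index.setAttributeName("lastName");
          bm.addMapIndexPlugin(index);
          grid.initialize();

          // Store and read a key/value pair inside a transaction.
          Session session = grid.getSession();
          ObjectMap employees = session.getMap("employees");
          session.begin();
          employees.insert("emp1", new Employee("Smith"));
          Employee e = (Employee) employees.get("emp1");
          session.commit();
      }
  }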


■Schema
A map set can have a schema associated with it. A schema is the metadata that describes the relationships between maps (when using homogeneous object types) or between entities.

ObjectGrid
|
|-Map Set 1 with the following schema
|--Map1→Map2⇔Map3
|
|-Map Set 2 with the following schema
|--Map1→Map2⇔Map3←Map4→Map5


■Map Schema
The OG can store serializable Java objects in each of the maps using the ObjectMap API. A schema can be defined over the maps to identify the relationships between the objects in the maps when the maps hold objects of a single type. Defining a schema for maps is required to query the contents of the map objects. An OG can have multiple map schemas defined.
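A sketch of defining a map schema programmatically and querying it. The QueryConfig/QueryMapping calls follow the ObjectQuery configuration API, but the constructor arguments, access-type constant, and all names below are assumptions that should be checked against the API docs:

  import java.util.Iterator;
  import com.ibm.websphere.objectgrid.ObjectGrid;
  import com.ibm.websphere.objectgrid.ObjectGridManagerFactory;
  import com.ibm.websphere.objectgrid.Session;
  import com.ibm.websphere.objectgrid.config.QueryConfig;
  import com.ibm.websphere.objectgrid.config.QueryMapping;
  import com.ibm.websphere.objectgrid.query.ObjectQuery;

  public class MapSchemaExample {
      // Hypothetical value type; "id" serves as its key property.
      public static class Employee implements java.io.Serializable {
          private final String id;
          private final String lastName;
          public Employee(String id, String lastName) { this.id = id; this.lastName = lastName; }
          public String getId() { return id; }
          public String getLastName() { return lastName; }
      }

      public static void main(String[] args) throws Exception {
          ObjectGrid grid = ObjectGridManagerFactory.getObjectGridManager()
                  .createObjectGrid("grid");
          grid.defineMap("employees");

          // Declare the single type held by the map so queries can navigate it.
          QueryConfig queryConfig = new QueryConfig();
          queryConfig.addQueryMapping(new QueryMapping(
              "employees", Employee.class.getName(), "id", QueryMapping.PROPERTY_ACCESS));
          grid.setQueryConfig(queryConfig);
          grid.initialize();

          Session session = grid.getSession();
          session.begin();
          session.getMap("employees").insert("e1", new Employee("e1", "Smith"));
          ObjectQuery query = session.createObjectQuery(
              "SELECT e FROM employees e WHERE e.lastName = 'Smith'");
          Iterator results = query.getResultIterator();
          session.commit();
      }
  }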


■Entity Schema
The OG can also store entities using the EntityManager API. Each entity is associated with a map. The schema for an entity map set is automatically discovered using either an entity descriptor XML file or annotated Java classes. Each entity has a set of key attributes and a set of non-key attributes. An entity can have relationships to other entities. OG supports one-to-one, one-to-many, many-to-one, and many-to-many relationships. Each entity is physically mapped to a single map in the map set. Entities allow applications to easily work with complex object graphs that span multiple maps. A distributed OG can have only one entity schema.
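A hedged sketch of an annotated entity schema with a one-to-many/many-to-one relationship; the Department and Employee classes are invented for illustration, and the annotations are assumed to come from the com.ibm.websphere.projector.annotations package:

  import java.util.Collection;
  import com.ibm.websphere.objectgrid.ObjectGrid;
  import com.ibm.websphere.objectgrid.ObjectGridManagerFactory;
  import com.ibm.websphere.objectgrid.Session;
  import com.ibm.websphere.objectgrid.em.EntityManager;
  import com.ibm.websphere.projector.annotations.Entity;
  import com.ibm.websphere.projector.annotations.Id;
  import com.ibm.websphere.projector.annotations.ManyToOne;
  import com.ibm.websphere.projector.annotations.OneToMany;

  @Entity
  class Department {
      @Id String deptId;                  // key attribute
      String name;                        // non-key attribute
      @OneToMany(mappedBy = "dept")
      Collection<Employee> members;       // one-to-many relationship
  }

  @Entity
  class Employee {
      @Id String empId;
      String lastName;
      @ManyToOne Department dept;         // many-to-one back to Department
  }

  public class EntityExample {
      public static void main(String[] args) throws Exception {
          ObjectGrid grid = ObjectGridManagerFactory.getObjectGridManager()
                  .createObjectGrid("grid");
          // Each registered entity class is backed by its own map in the map set.
          grid.registerEntities(new Class[] { Department.class, Employee.class });
          grid.initialize();

          Session session = grid.getSession();
          EntityManager em = session.getEntityManager();
          session.begin();
          Department dept = new Department();
          dept.deptId = "D1";
          dept.name = "Sales";
          em.persist(dept);               // stored in the Department entity map
          session.commit();
      }
  }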


■Distributed ObjectGrid concepts
Distributed OGs require minimal additional infrastructure to operate. The minimum infrastructure is some scripts to install, start, and stop a J2EE application on a server. OG servers store the cached data, and clients connect remotely to the OG servers.


■Dynamic deployment topology
New for OG v6.1 is support for plug-and-play style configuration. The dynamic configuration capability of OG makes it easy to add resources to the system. OG introduces a container to host the data and a catalog service as the touch point for the grid. The former is responsible for maintaining the data; the latter is responsible for forwarding requests to the right place on first touch, allocating space in host containers, and managing the health and availability of the overall system.

Clients connect to a catalog service, retrieve a description of the OG server topology, and then communicate directly with each OG server as needed. When the server topology changes due to the addition of new servers or the failure of others, the client is automatically routed to the appropriate server hosting the data.
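As a sketch, a client bootstraps through the catalog service and then works with the grid as usual; the host name, port, and grid name below are assumptions:

  import com.ibm.websphere.objectgrid.ClientClusterContext;
  import com.ibm.websphere.objectgrid.ObjectGrid;
  import com.ibm.websphere.objectgrid.ObjectGridManager;
  import com.ibm.websphere.objectgrid.ObjectGridManagerFactory;
  import com.ibm.websphere.objectgrid.Session;

  public class ClientExample {
      public static void main(String[] args) throws Exception {
          // Bootstrap against the catalog service (host and port are assumptions).
          ObjectGridManager ogm = ObjectGridManagerFactory.getObjectGridManager();
          ClientClusterContext ccc = ogm.connect("cathost:2809", null, null);
          ObjectGrid grid = ogm.getObjectGrid(ccc, "grid");

          // From here on, requests route directly to the containers that host
          // each partition; topology changes are handled transparently.
          Session session = grid.getSession();
      }
  }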

The following diagram illustrates many of the possible deployment combinations:
-A catalog service typically exists in its own cluster of JVMs. A single catalog service can be used to manage multiple OGs.
-A container can be started in a JVM by itself or can be loaded into an arbitrary JVM with other containers for different OGs (for example, an application server JVM).
-A client can exist in any JVM and talk to one or more OGs. A client can also exist in the same JVM as a container.

diagram: http://www.ibm.com/developerworks/wikis/download/attachments/1935588/ogarch_dyntopology.jpg


■ObjectGrid container
The container is a service that hosts application data for the grid. This data is generally broken into parts, called partitions, and hosted across multiple containers, so each container in turn hosts a subset of the complete data. A Java Virtual Machine (JVM) may host one or more containers, and each container can host multiple shards.
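For example, a container JVM might be launched with the startOgServer script that ships with OG, pointing it at the catalog service and the two configuration files; the server name, host, port, and file names below are assumptions:

  startOgServer.sh c0 -catalogServiceEndPoints cathost:2809
      -objectgridFile objectgrid.xml -deploymentPolicyFile deployment.xml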


■Map set
A map set is a collection of maps with a common partitioning key. The data within the maps is replicated based on the policy defined on the map set. A map set is only used for distributed OG topologies and is not needed for local OGs.
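In the dynamic topology, the partitioning and replication policy for a map set is declared in the deployment policy XML. A minimal sketch, assuming a grid named "grid" with a single map "employees"; the map set name, partition count, and replica counts are illustrative:

  <?xml version="1.0" encoding="UTF-8"?>
  <deploymentPolicy xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="http://ibm.com/ws/objectgrid/deploymentPolicy ../deploymentPolicy.xsd"
      xmlns="http://ibm.com/ws/objectgrid/deploymentPolicy">
    <objectgridDeployment objectgridName="grid">
      <mapSet name="mapSet1" numberOfPartitions="13"
              minSyncReplicas="1" maxSyncReplicas="1" maxAsyncReplicas="1">
        <map ref="employees"/>
      </mapSet>
    </objectgridDeployment>
  </deploymentPolicy>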


■Partition
A Partition hosts a subset of the data in the grid. Think of it like a drawer in a file cabinet. Say you had employee records in a two-drawer file cabinet, with A-M in the upper drawer and N-Z in the bottom drawer. In OG terms, this file cabinet would be a grid with two partitions: one partition hosts employees A-M, the other N-Z. Of course, you may need a larger file cabinet to hold all of the files, so you may have a 26-drawer file cabinet, one drawer for each letter. This is fine even when you only have two containers, as OG will automatically put multiple partitions in a single container and then spread them out as more containers become available.
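Conceptually, default placement works like hashing a record into a drawer. A sketch of the idea only, not the product's actual code, assuming hash-based partitioning:

  // Which "drawer" (partition) does a key land in? Conceptual sketch only.
  int numberOfPartitions = 26;                   // taken from the deployment policy
  Object key = "employee-A123";                  // any map key
  int partition = Math.abs(key.hashCode()) % numberOfPartitions;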


■Shard
A Shard is an instance of a partition and has one of two roles: primary or replica. The primary shard and its replicas make up the physical manifestation of the partition. This means that they each redundantly host the full set of data for the partition.

□Primary Shard
A primary shard is the only partition instance that allows transactions to write to the cache.

□Synchronous Replica
A synchronous replica shard receives updates as part of the primary's transaction to guarantee data consistency. A synchronous replica can double the response time, as the transaction has to commit on both the primary and the synchronous replica before the transaction is complete.

□Asynchronous Replica
An asynchronous replica shard receives updates after the transaction commits to limit the impact on performance, but it introduces the possibility of data loss, as the asynchronous replica can be several transactions behind the primary.


■Catalog Service
The Catalog Service's responsibilities are broken up into a series of services. Locality is managed by the Location Service; allocation is done through the Placement Service; peer grouping for health monitoring is done by the Core Group Manager; and a final service provides access to administration.

JVM
|--Catalog Service
| |--Location Service
| |--Placement Service
| |--Core Group Mgr
| |--Administration

The Catalog Service hosts logic that should be idle during steady state and as such has little influence on scalability. It is built to service hundreds of containers becoming available simultaneously. For availability, the catalog service should be configured into a cluster.

Catalog Service Cluster
|--JVM
| |--Catalog Service...
|--JVM...
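For availability, each catalog service cluster member is typically started with the full member list in name:host:clientPort:peerPort form; the server names, hosts, and ports below are assumptions:

  startOgServer.sh cs1 -catalogServiceEndPoints cs1:host1:6601:6602,cs2:host2:6601:6602
  startOgServer.sh cs2 -catalogServiceEndPoints cs1:host1:6601:6602,cs2:host2:6601:6602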

□Location Service
The Location Service acts as the touch point both for clients looking for the containers that host the application they seek and for containers looking to register hosted applications with the Placement Service. The Location Service runs in all of the cluster members to scale out this function.

□Placement Service
The Placement Service is the central nervous system for the grid. This service is responsible for allocating individual shards to their host containers. It runs as a one-of-N elected service in the cluster, so there is always exactly one instance of the service running. If that instance fails, another process is elected and takes over. All state for the catalog service is replicated across all servers hosting the catalog service for redundancy.

□Core Group Manager
The Core Group Manager is a fully automatic service responsible for organizing containers into small groups of servers that are then automatically loosely federated to make an OG. When a container first contacts the catalog service, it waits to be assigned to either a new or existing group of several JVMs. An OG consists of many such groups, and this grouping is a key scalability enabler. Each group is a set of JVMs that monitor each other's availability through heartbeating. One of the group members is elected the leader and has the added responsibility of relaying availability information to the catalog service, allowing it to react to failures by reallocation and route forwarding.

□Administration
As the central nervous system of the grid, the catalog service is also the logical entry point for system administration.


■Static deployment topology
All versions of OG support a static deployment topology, where the OG server topology is defined in a cluster descriptor XML file. No catalog service is needed, as each primary, replica, and partition is explicitly defined in one central file.


■Choosing a dynamic or static deployment topology
The dynamic deployment topology is the most flexible, as it allows OG servers to be added and removed to better utilize resources without tearing down the entire cache. This is accomplished by the catalog service, which automatically manages the placement of shards onto the active containers. All dynamic deployment topology clients communicate with the catalog service and the OG servers through IIOP.

Although a static deployment topology is fixed, it doesn't require a catalog service and allows fixed placement of the OG data. The location of each primary, replica, and partition is explicitly defined in the cluster deployment XML. All clients communicate with the OG servers through TCP/IP.

The dynamic deployment topology will continue to be enhanced to support a variety of deployment scenarios and should be strongly considered over the static deployment topology.



ref:
WebSphere eXtreme Scale V6.1 User Guide
ObjectGrid architecture
http://www.ibm.com/developerworks/wikis/display/objectgridprog/ObjectGrid+architecture?decorator=printable

tag : WebSphere ObjectGrid architecture OG
