Jul 31 2013
 

I was thinking about the process of writing the master's thesis report. You know, it is not the funniest part of the project, but it is still mandatory.

1. Start with the questions you would like to answer with your thesis. Whether it is a well-known problem or something new that you just came up with, try to be critical and find the questions that you will actually be able to answer with your thesis. Obviously, don't ask questions that are impossible to answer, e.g., is there life in another galaxy?…

2. Write preliminary conclusions and check whether your questions are actually answered in them.

3. Use the previously defined questions as a starting point for the problem definition. After that it should not be a problem to define and properly describe the set of limitations.

4. Use your “stuck” time properly. As writing the final report is the greatest pain in the rear part of your body, try to fill in the background and related work sections as soon as you read new papers.

5. Constantly rewrite and rethink your abstract throughout the whole project. Yes, write it before having anything done. This will help you stay focused and get straight to the point during the process.

6. Plan your experimental part once the related work, background and a “kinda ok” system description are ready. Proper selection of experiments is the key to success. Look at the evaluation parts of related papers. Be coherent with the comparison data for your project.

7. Talk to your supervisor… I wish I had done it more often.

8. Use Mendeley or Zotero. It helps so much to keep all your references and notes on the papers in one place, even categorised into directories. Unlike Zotero, Mendeley also lets you add all the PDFs you have on disk, so you always have access to the full text, which can be annotated and stored in Mendeley. So my personal choice is Mendeley.

9. The easiest way to do something innovative is to either reimplement something that already exists and improve it, or find missing functionality in something and implement it. No matter what you are doing, the identification of the problem and how/why you decided to work on it should be included in your introduction.

10. Do all the preliminary presentations properly. It does not matter what you are selling, what matters is how you do it!

Honestly, I am not sure if I followed all of this advice, but what I definitely did, and what helped me a lot, was to procrastinate by writing the report 🙂 I wish you similar procrastination and good luck 😉

May 20 2013
 

The purpose of this post is to describe the organization and properties of the system.

[Figure: GDS system architecture]

The figure above shows some concepts of the system design and demonstrates the functionality that is covered by the system. The GDS (Genium Data Store) design can be captured as a set of interacting layers, as presented in the figure. The main idea of this figure is to highlight the multilayer organization of the system, where each layer serves its own purpose and is separated from the others. The lowest two levels establish communication between the nodes in the system. Nodes are both clients and data stores. Each node, when joining the system, declares its status and adds itself to the corresponding subscription group. There are several subscription abstractions, among them client and sequencer.

To maintain total ordering, a special subscription group is reserved: the sequencer group. On top of the messaging middleware a distributed component is placed. It supports data replication, which provides scalability and availability by reducing traffic across the components. On top of the replication layer sits a data store operation layer which (a) supports a wide range of operations over the data, e.g., insert, update, read and range queries; (b) frames client messages with the information needed to access the stores, hence resolving concurrency conflicts; and (c) applies a snapshot mechanism to allow safe re-requests of range queries.

This infrastructure makes the system easy to maintain and control. Relying on INET messaging provides a great advantage in preventing inconsistencies and conflicts.

Functionality

The basic functionality provided by GDS is a distributed, consistent, fault-tolerant and scalable infrastructure that serves simple requests over data: insert, get and range queries. In order to make a request, the client communicates with the storage part through the provided API (a rough sketch of what such an API could look like follows the list below). Each data store processes only those messages that belong to its partition; therefore, all information about the partitioning is stored on the sequencer to keep track of the number of replicas that serve the data.

With this functionality it is possible to:

  • Store/Retrieve the data
  • Provide consistency, availability, scalability and high performance
  • Leverage the high-performance message bus and in-memory datastore
  • Eliminate a need for highly scalable storage hardware
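
As a concrete illustration, here is a minimal sketch of what a client-facing API along these lines could look like. The interface, the Record type and the method names are hypothetical, chosen for illustration rather than taken from the actual GDS code.

import java.util.List;
import java.util.Map;

// Hypothetical client-facing API sketch; names and signatures are illustrative, not the real GDS API.
public interface GdsClient {

    // A stored row: primary key, timestamp version, and fixed-size column values.
    record Record(String primaryKey, long timestamp, Map<String, byte[]> columns) {}

    // Insert a record into the given table.
    void insert(String table, Record record);

    // Read a single record by primary key.
    Record get(String table, String primaryKey);

    // Range query over the timestamp index; the server may truncate the response
    // to the limit L and report the full size so the client can issue a follow-up request.
    List<Record> rangeQuery(String table, long fromTimestamp, long toTimestamp);
}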

Data and Query Model

GDS is primarily a column-oriented data store, with possible extension to other database providers later on. This is made simple because adding new database schemas and tables to the system is relatively easy and can be plugged in via the data store API. Schemas are not flexible: new attributes cannot be added at any time, only when the table is created, as the data is stored in fixed-size columns. Moreover, each record must be marked with a timestamp, to speed up further read requests and to avoid inconsistencies during updates. The timestamp of an update serves as a version, which is checked before an update is applied; this way, timestamp consistency is guaranteed.
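
To illustrate the timestamp-as-version idea, here is a minimal sketch of how an update could be validated against the stored version. The class and field names are assumptions made for this illustration, not the actual GDS implementation.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch of timestamp (version) checking on update; not the actual GDS code.
// Updates are assumed to arrive via the totally ordered stream and are applied sequentially.
public final class TimestampVersionedStore {

    private static final class Row {
        final long timestamp;  // version of the row
        final byte[] value;    // fixed-size column data
        Row(long timestamp, byte[] value) { this.timestamp = timestamp; this.value = value; }
    }

    private final Map<String, Row> rows = new ConcurrentHashMap<>();

    // Apply an update only if the caller's expected timestamp matches the stored one,
    // then install the new value with a newer timestamp; returns false on a version conflict.
    public boolean update(String key, long expectedTimestamp, long newTimestamp, byte[] newValue) {
        Row current = rows.get(key);
        if (current == null || current.timestamp != expectedTimestamp || newTimestamp <= current.timestamp) {
            return false; // stale or conflicting update: reject to preserve timestamp consistency
        }
        rows.put(key, new Row(newTimestamp, newValue));
        return true;
    }
}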

The query language of GDS supports selection from a single table. Updates must specify the primary key, similar to PNUTS. Single-table queries provide very flexible access for range requests compared to distributed hash or ordered data stores, while still being restrictive compared to relational systems.

Read Query Support

Adapting NoSQL data stores to relational use cases keeps the need for range queries. This functionality is sufficient for further data processing and analysis in offline mode. In the trading environment, support for time-range querying is very important, since further transactional and analytic processing of the data is required. The main use cases are logging, extracting order history, price history, index calculation, etc. All of these dictate the necessity of range query support.

Moreover, it can be a backbone for a stable way of analyzing the data “on the fly”.

There is an extensive body of work on exploring and evaluating range queries. Among the most common solutions for supporting range queries are locality-preserving hash functions and various distributed index structures, such as trees.

GDS relies on data locality and a timestamp index, where the timestamp is added either by the user or automatically by the data store. The underlying data store ensures that each record is timestamped; therefore, lookup can be improved by specifying an approximate time range. Data in the store is divided into chunks of around 100 000 records each. Each chunk is indexed by timestamp, and records within a chunk are time-indexed as well. This level of separation significantly reduces lookup time.
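
A rough sketch of what such a chunk-level timestamp index could look like is given below. The chunk size constant and class names are illustrative assumptions, not taken from the real implementation.

import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

// Illustrative sketch of a chunk-level timestamp index; names and sizes are assumptions.
public final class ChunkedTimeIndex {

    public static final int CHUNK_SIZE = 100_000; // records per chunk, as described above

    // Maps the first timestamp of each chunk to its chunk id.
    private final TreeMap<Long, Integer> chunkByStartTime = new TreeMap<>();

    public void registerChunk(int chunkId, long firstTimestamp) {
        chunkByStartTime.put(firstTimestamp, chunkId);
    }

    // Return the ids of chunks that may contain records in [from, to]:
    // the chunk covering 'from' plus every chunk that starts no later than 'to'.
    public List<Integer> chunksForRange(long from, long to) {
        List<Integer> result = new ArrayList<>();
        Long start = chunkByStartTime.floorKey(from);
        if (start == null) {
            start = chunkByStartTime.ceilingKey(from);
        }
        if (start == null || start > to) {
            return result; // no chunk can overlap the requested range
        }
        result.addAll(chunkByStartTime.subMap(start, true, to, true).values());
        return result;
    }
}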

It was decided to limit the size of a range query response. The main reason is the availability of the system, which could degrade while transmitting range responses of unbounded size. The limit is set to a maximum of L = 10 000 records, which is around 5 MB. When a query is processed, information about the full result size is reported to the client. If the response exceeds L, only the first L records are transmitted to the client. If necessary, an additional request can be issued to retrieve the missing records.

To guarantee consistency in the case of such an additional request, a simple snapshot mechanism is triggered, as sketched in the snippet below. The same procedure guarantees consistency when the TCP connection transmitting the response fails.

Snapshot mechanism works as follows:

on_range_request:
   send(type = SNAPSHOT, empty message)   // append a SNAPSHOT marker to the end of the current store
   retrieve(query)                        // read the data from the store
   send(response directly to client)
   if (failure || limit_for_response is exceeded)
      retrieve data until the snapshot point is reached   // on re-request

The snapshot mechanism is only used for the logging use case. The approach in this snippet guarantees that the range query response will be equal whenever it is requested. This holds only because there are no update operations on the time-oriented data schema.
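
Below is a minimal Java sketch of such a snapshot-bounded range read. The messaging and storage interfaces are hypothetical placeholders standing in for whatever the real system uses.

import java.util.List;

// Hypothetical sketch of the snapshot-bounded range read described above; interfaces are placeholders.
public final class RangeQueryHandler {

    public static final int RESPONSE_LIMIT = 10_000; // L, the maximum number of records per response

    interface MessageBus { long appendSnapshotMarker(); }  // returns the marker's position in the ordered stream
    interface Store { List<byte[]> readRange(long fromTs, long toTs, long upToMarker, int limit); }
    interface ClientConnection { void send(List<byte[]> records, boolean truncated); }

    private final MessageBus bus;
    private final Store store;

    public RangeQueryHandler(MessageBus bus, Store store) {
        this.bus = bus;
        this.store = store;
    }

    // Serve a range query: first fix a snapshot point on the ordered stream, then read
    // only data up to that point, so a re-request after truncation or a TCP failure
    // returns exactly the same result.
    public void onRangeRequest(long fromTs, long toTs, ClientConnection client) {
        long snapshotPoint = bus.appendSnapshotMarker();
        List<byte[]> records = store.readRange(fromTs, toTs, snapshotPoint, RESPONSE_LIMIT + 1);
        boolean truncated = records.size() > RESPONSE_LIMIT;
        if (truncated) {
            records = records.subList(0, RESPONSE_LIMIT); // only the first L records are sent
        }
        client.send(records, truncated);
    }
}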

Towards Consistency, Availability and Speed

Designing a system that needs to operate in production and within the strong SLA requirements of NOMX is complex. The system needs scalable and robust solutions for failure recovery, replica synchronization, concurrency and request routing. The servers must be resilient to many kinds of faults, ranging from the failure of individual disks to machines or routers. GDS uses active replication, based on the totally ordered stream of messages produced by the sequencer, to achieve high availability and a consistent view of the data. In short, it provides fully serializable ACID semantics over the data store.

To do so, the following is used:

  • for consistency, the reliable totally ordered stream of messages produced by the sequencer is used;
  • for availability, the highly robust and fast NOMX message bus supports a great number of incoming operations, and active replication is implemented to reduce the load on any single replica;
  • for speed, the same highly robust and fast NOMX message bus is used.

It is not hard to notice that consistency, availability and performance all depend on the NOMX messaging middleware. This subsystem, whose various functions underpin the sustainable behavior of GDS, is therefore highly critical.

Low Latency

Latency is a critical part of a production-oriented system architecture. However, making latency a first-order constraint in the architecture is not very common. As a result, systems are usually shaped primarily by failure resilience, availability, consistency concerns, etc.

The main question here is how to design a system that is oriented towards latency. A few reductions of the system requirements for the aggressive production environment are made:

  • GDS applications do not require wide-area deployment
  • Localized disasters are not taken into account; however, this could be addressed by adding site replication

The following steps were taken on the way to speed:

  • Lightweight Sequencer. The sequencer in the system has limited functionality: its main job is reduced to assigning a sequence number to messages and forwarding them to all subscribers. Moreover, the sequencer is completely isolated from the content of incoming messages; however, it can add extra information to a message, such as the sequence number or other user information.
  • Good Decomposition. Decomposition of the application is very important in the design of any distributed application. GDS exhibits relatively good decoupling, with several levels and components. The roles in the system are sequencer, clients and data stores. All of them are replicated and easily replaceable. Moreover, a layer of abstraction is placed under both clients and data stores, which manages registration and communication with the sequencer and makes them transparent to both clients and stores.
  • Asynchronous Interactions. All interaction in the system is based on the well-known event-driven paradigm and relies on asynchronous communication over UDP. The underlying messaging system, which uses MoldUDP, makes the communication reliable. Moreover, if a synchronous API is ever needed, it is very easy to build on top of the asynchronous one (see the sketch after this list).
  • Non-Monolithic Data. All data is stored in column-oriented storage and partitioned by range or by hash, depending on the data set. This gives the effect of highly decomposed data without any need to perform joins, which are not supported by the system.
  • Low Latency Reliable Totally Ordered Message Bus. To improve performance, the highly scalable and fast NOMX messaging middleware was leveraged in many ways.
  • Effective Programming Techniques. Following the advice from [Effective C++, Java], GDS was built to reduce all possible overheads from initialization, communication and garbage collection.
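
As an illustration of the point about asynchronous interactions, here is a minimal sketch of wrapping a callback-based asynchronous request API into a blocking call. The AsyncClient interface is a hypothetical stand-in, not the real messaging API.

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.function.Consumer;

// Illustrative sketch: building a blocking call on top of a callback-based asynchronous API.
public final class SyncAdapter {

    public interface AsyncClient {
        // Sends a request and invokes the callback when the (sequenced) response arrives.
        void send(byte[] request, Consumer<byte[]> onResponse);
    }

    private final AsyncClient async;

    public SyncAdapter(AsyncClient async) {
        this.async = async;
    }

    // Blocks the caller until the response arrives or the timeout expires.
    public byte[] sendAndWait(byte[] request, long timeoutMillis)
            throws InterruptedException, TimeoutException {
        CompletableFuture<byte[]> future = new CompletableFuture<>();
        async.send(request, future::complete);
        try {
            return future.get(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (java.util.concurrent.ExecutionException e) {
            throw new IllegalStateException(e.getCause());
        }
    }
}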

Summary

GDS is a unique distributed system built on top of the reliable total order multicast messaging middleware developed in-house by NOMX. It is built to serve a large number of requests per second, and to do so fast, with consistency, fault tolerance and availability in mind. Moreover, it benefits from the performance of the NOMX messaging system.

A wide set of operations is supported over the data, such as insert, read, range query and update. This set is spread over two different data sets: an immutable log and mutable object records, both actively replicated via the totally ordered stream of messages from the sequencer. Over the immutable data two types of operations are supported: insert and range query. Mutable data supports three operations: insert, update and get. The first subset is made reliable by extra fault resilience, e.g., against link failures. The second subset provides resolution for concurrent updates, e.g., timestamp consistency. Depending on the data type, the data is partitioned either by range or by hash, respectively, to guarantee the maximum performance of the subset's operations.

Further chapters describe the architecture of the system and present a proof of concept for the performance, scalability and failure resilience of the prototype.

 

 

Apr 12 2013
 

Multicast operations are operations that are sent from one process to a set of processes, and the membership of the group is usually transparent to the sender [1]. However, a simple multicast protocol does not guarantee any ordering or message delivery. Therefore, stronger assumptions, such as reliability, should be made in the context of today's distributed systems. Some systems [5] rely on reliable multicast, in which any transmitted message is received either by all processes or by none. In other words, there cannot be a situation where a client accesses a server just before it crashes and observes an update that no other server will process. This property is called uniform agreement. Moreover, to maintain a consistent and fault-tolerant system, a total order assumption should be made in addition to reliable uniform multicast.

The simplest specification of uniform reliable total order multicast can be defined in terms of two primitives [2], TO-multicast(m) and TO-deliver(m), where m is some message. When a process issues a uniquely identified message m via TO-multicast(m), the following properties are assumed [3]:

• Validity. If a correct process TO-multicasts a message m, then it eventually TO-delivers m.

• Uniform Agreement. If a process TO-delivers a message m, then all correct processes eventually TO-deliver m.

• Uniform Integrity. For any message m, every process TO-delivers m at most once, and only if m was previously TO-multicast by its sender.

• Uniform Total Order. If two processes, p and q, both TO-deliver messages m and m’, then p TO-delivers m before m’ if and only if q TO-delivers m before m’.

If all these properties are satisfied, then uniform reliable total order multicast takes place. Uniformity means that no process, not even a faulty one, is allowed to deliver a message out of order at any time.

Internally at NOMX, multicast communication is used by most of the subsystems, as it is the only fast and reliable way to guarantee consistency and agreement among all nodes at minimal cost.

There are three main ways to maintain total order: symmetric messaging, collective agreement [Birman and Joseph 1987], and sequencer-based ordering [Kaashoek 1989]. The system that I am developing for my master's project uses the single-sequencer ordering mechanism, as it is more efficient than the consensus-based ones. The simplest illustration of total order is shown in the picture below: no matter when the messages were issued, they are delivered in the same order to all processes. For sequencer-based mechanisms the main problems are a possible bottleneck and a critical point of failure in the sequencer. Moreover, the sequencer may limit the scalability of the system. This can be overcome using a replicated standby sequencer that delivers all messages issued by the primary one and takes over in case of failure. (A toy sketch of the sequencer approach is given after the figure.)

[Figure: total order delivery via a sequencer]
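
To make the sequencer-based approach concrete, below is a toy sketch of a single sequencer that assigns sequence numbers and forwards messages to all subscribers. It is an illustration of the idea only, not the NOMX implementation.

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Toy sketch of sequencer-based total order: every incoming message gets the next
// sequence number and is forwarded to all subscribers in that order.
public final class Sequencer {

    public interface Subscriber {
        void deliver(long sequenceNumber, byte[] payload);
    }

    private final List<Subscriber> subscribers = new CopyOnWriteArrayList<>();
    private long nextSequence = 1;

    public void subscribe(Subscriber s) {
        subscribers.add(s);
    }

    // Called for every message received from any publisher. Because a single number
    // is assigned here, all subscribers deliver messages in exactly the same order.
    public synchronized void onMessage(byte[] payload) {
        long seq = nextSequence++;
        for (Subscriber s : subscribers) {
            s.deliver(seq, payload);
        }
    }
}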

References:

[1] George F. Coulouris, Jean Dollimore, and Tim Kindberg. Distributed Systems: Concepts And Design. Pearson Education, 2005. ISBN 9780321263544.

[2] Xavier Défago, André Schiper, and Péter Urbán. Total order broadcast and multicast algorithms: Taxonomy and survey. ACM Comput. Surv., 36(4):372–421, December 2004. ISSN 0360-0300. URL http://doi.acm.org/10.1145/1041680.1041682.

[3] Vassos Hadzilacos and Sam Toueg. A modular approach to fault-tolerant broadcasts and related problems. Technical report, 1994.

[4] L. E. T. Rodrigues, H. Fonseca, and P. Verissimo. Totally ordered multicast in large-scale systems. In Proceedings of the 16th International Conference on Distributed Computing Systems, pages 503–510, 1996.

[5] S. K. Kasera, J. Kurose, and D. Towsley. Scalable reliable multicast using multiple multicast groups. SIGMETRICS Perform. Eval. Rev., 25(1):64–74, June 1997.
Mar 04 2013
 

This is quite a preliminary version of the problem description, i.e. the motivation.

Again, any comments are more than welcome 🙂

Problem description

There are many existing distributed systems (DS) focused on optimizing various system properties, e.g. availability, robustness, consistency. Designing a distributed data storage and data processing system for a real-time stock exchange environment is quite challenging and has to meet strict SLA requirements. Current general-purpose solutions are eager to sacrifice some properties in order to achieve great improvements in others. Moreover, none of them leverages the properties of uniform reliable total order multicast [] to supply fault tolerance and ACID properties for data operations. (Here a few paragraphs with some basic classification of DSS and their solution focus.)

However, despite algorithmic advancements in total order broadcast and the development of distributed database replication techniques based on it, there is limited research on applying these algorithms to large-scale data storage and data processing systems. (Here a few sentences about total order algorithms and their applications.) The limited adoption in real-time large-scale systems might be due to earlier scalability issues of messaging systems, which were bounded by the capacity of the message bus.

We propose a system based on the NASDAQ OMX low-latency uniform reliable totally ordered message bus, which is highly scalable (the capacity of the message bus exceeds 2 million messages per second), available, and consistent. This messaging abstraction turns an unordered incoming stream of data into an ordered sequence of operations which is backed up by rewinders; a message gap-filling mechanism is therefore automatically supported and served by them. The ordered stream of data is published on the so-called “message stream” and is seen by everyone on the stream. Based on this message bus, optimistic delivery can be assumed. In other words, an early indication of the estimated uniform total order is provided, and it is guaranteed that all messages are eventually committed in the same order to all subscribed servers.

The main focus of this work is leveraging a reliable total order multicast protocol for building a real-time, fault-tolerant, ACID and low-latency distributed data store. The major difficulty is guaranteeing fault tolerance and availability for the system and ACID properties for the data operations. Moreover, supporting the system in real time is challenging, and maintaining distributed read queries and concurrent updates is no straightforward endeavor. To reach the performance goals, the following approach is applied:

  • Scalability: Adding extra instances on the stream is very easy. The only thing required is to declare the schemas and tables that are served by the data store.
  • Availability: The ability to serve requests at any given time is provided for both simple operations and queries. First, the capacity of the message bus can handle simple operations without extra tweaks. Second, read query responses are sent directly to the requester and are served by the fastest data replica.
  • Consistency: As the underlying message-passing abstraction produces a uniform reliable totally ordered stream of requests, each instance sees exactly the same sequence of messages. This gives a consistent view from any instance at any request time. Similarly, for concurrent updates, totally ordered timestamps per update are used; hence timestamp concurrency control [] is deployed.
  • Fault-Resilience: Since exactly the same stream of requests is received by every replica, the failure of any instance during simple operations does not matter. Failure of a data store while serving a query is handled by a simple snapshot indication message on the message stream. This way the query can be requested again from the point of interruption.
  • Read Query Support: In order to increase the availability level, a limit on the query response size is set. If an extension of the response is required, the query has to be submitted again.

 

Mar 02 2013
 

I think it is kind of time to start working on the report draft 🙂

Here is the first version of an abstract for my project report. Any comments are more than welcome!

Abstract

In recent years the need for distributed, fault-tolerant, ACID and low-latency data storage and data processing systems has paved the way for new systems in the area of distributed systems. The growth of unbounded streams of data and the need to process them with low latency are some of the reasons for the interest in this area. At the same time, it has been argued that total order algorithms are a fundamental building block in the construction of distributed fault-tolerant applications.

In this work, we leverage the NASDAQ OMX low-latency uniform reliable totally ordered message bus with a capacity of 2 million messages per second. The ACID properties of the data operations are easily implemented using the message bus, as it forwards all transactions in a reliable total order fashion. Moreover, relying on the reliable totally ordered messaging, active replication is integrated for fault handling and load balancing. A prototype was developed using requirements from a production environment to demonstrate its feasibility.

Experimental results show that around 250 000 operations per second can be served with 100 microseconds latency. Query response capacity is 100 Mbps. It was concluded that uniform totally ordered sequenced input data can be used in real time for large-scale distributed data storage and processing systems to provide availability, consistency and high performance.

Mar 01 2013
 

GENIUM Data Store (GDS) is the system I am working on for my Master's thesis at NASDAQ.

GENIUM INET messaging bus

To achieve its performance goals, GDS leverages the high-performance GENIUM INET messaging bus. The GENIUM INET messaging bus is based on UDP multicast and is made reliable by a totally ordered sequence of backed-up messages with a gap-fill mechanism using rewinders. As is well known, total order broadcast algorithms are fundamental building blocks for constructing fault-tolerant systems. The purpose of such algorithms is to provide communication primitives that allow processes to agree on the set of messages they deliver and on the delivery order. The NASDAQ OMX implementation of this abstraction assumes a perfect failure detector, i.e. it forces a process to fail if it is considered faulty. Moreover, uniform reliable total order is preserved: a process is not allowed to deliver any message out of order, even if it is faulty.

The receivers of the ordered messages guarantee exactly-once delivery of each message to the applications; this way uniform integrity is ensured. Across the cluster of applications/clients and servers connected to the message stream, messages are delivered in the same order.

The message stream can be configured without restriction, so it can contain any number of clients placed on the same or different servers, with a defined replication level. The server/sequencer is responsible for creating the sequenced/ordered stream as an output from the received clients' messages.

Failure resilience is provided by the following:

  •  All message callbacks are fully deterministic and replayable (if single-threaded), as the incoming stream is identical each time it is received.
  • Replication can be adopted by installing the same receivers at multiple servers.
  • As long as a new primary has rewound to the same point as the failed one, the message stream is sufficient to synchronize state.

GDS takes advantage of the infrastructure described above and thereby maintains a fault-tolerant real-time system from the start. Moreover, the distributed state is kept consistent by adhering to the sequence numbering implied by the message stream.

MoldUDP

In GENIUM Data Store, transactions are submitted through MoldUDP, which ensures the lowest possible transaction latency.

MoldUDP is a networking protocol that makes the transmission of data messages efficient and scalable in a scenario with one transmitter and many listeners. It is a lightweight protocol built on top of UDP in which missed packets can be easily traced and detected, but retransmission is not supported.

Some optimizations can be applied to make the protocol more efficient: (a) multiple messages are aggregated into a single packet to reduce network traffic, and (b) a caching re-request server is placed near remote receivers to reduce latency and bandwidth.

MoldUDP presumes that the system consists of listeners, which are subscribed to some multicast groups, and a server, which transmits on those multicast groups. The MoldUDP server transmits downstream packets via UDP multicast to deliver the normal data stream to the listeners. The server also sends periodic heartbeats to clients so that they can detect packet loss if it takes place. Moreover, listeners should be configured with the IP and port to which they can submit requests.

Note: a message in this context is an atomic piece of information, from 0 to 64 KB in size, carried by the MoldUDP protocol.
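
As a rough illustration of how a listener can detect gaps in such a sequenced stream, consider the sketch below. The packet fields and the re-request hook are simplified assumptions, not the actual MoldUDP wire format.

// Simplified sketch of listener-side gap detection over a sequenced UDP stream.
// The packet layout and re-request hook are illustrative assumptions.
public final class GapDetectingListener {

    public interface Rerequester {
        // Ask the caching re-request server to resend messages [fromSeq, fromSeq + count).
        void request(long fromSeq, int count);
    }

    private final Rerequester rerequester;
    private long expectedSeq = 1; // next sequence number we expect to deliver

    public GapDetectingListener(Rerequester rerequester) {
        this.rerequester = rerequester;
    }

    // Called for each downstream packet carrying the sequence number of its first message
    // and the number of messages it contains (heartbeats carry a count of zero).
    public void onPacket(long firstSeq, int messageCount) {
        if (firstSeq > expectedSeq) {
            // Some messages were missed: ask the re-request server to fill the gap.
            rerequester.request(expectedSeq, (int) (firstSeq - expectedSeq));
        }
        long nextAfterPacket = firstSeq + messageCount;
        if (nextAfterPacket > expectedSeq) {
            expectedSeq = nextAfterPacket;
        }
    }
}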

SoupTCP

GENIUM Data Store will support read queries. That is why a TCP-based protocol is intended to be used to stream the data to the client in response to a submitted query.

SoupTCP is a lightweight point-to-point protocol built on top of TCP/IP sockets. It allows a server to deliver a set of sequenced messages to a client and guarantees that the client receives all messages sent by the server strictly in sequence, even when failures occur.

Server functionality with SoupTCP includes (a) client authentication on login and (b) delivery of a logical stream of sequenced messages to a client in real time. Clients send messages to the server that are not guaranteed to be delivered in case of failure; that is why the client may need to resubmit a request to the server.

Protocol flow:

  • The client opens a TCP/IP socket to the server with a login request.
  • If the login information is valid, the server responds with an accept message and starts sending sequenced data.
  • Both client and server compute message numbers locally by simply counting messages; the first message in a session is always number 1.
  • Link failure is detected by heartbeating. Both server and client send heartbeat messages: the former so that a client notices a failure and can reconnect to another socket, the latter so that the server can close the socket of a failed client and listen for a new connection. (A simplified sketch of the client-side bookkeeping follows below.)
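
A simplified sketch of the client-side bookkeeping implied by this flow (local message counting plus a heartbeat timeout) is given below. It illustrates the idea only; the timeout value and class names are assumptions, not the actual protocol code.

// Simplified sketch of client-side SoupTCP-style bookkeeping: messages are counted locally
// (the first message of a session is number 1) and the link is declared dead if no data or
// heartbeat arrives within the timeout.
public final class SoupClientSession {

    private static final long HEARTBEAT_TIMEOUT_MILLIS = 15_000; // assumed timeout value

    private long nextMessageNumber = 1;  // local counter, starts at 1 for each session
    private long lastTrafficMillis = System.currentTimeMillis();

    // Called for every sequenced message received from the server; returns its number.
    public long onSequencedMessage(byte[] payload) {
        lastTrafficMillis = System.currentTimeMillis();
        return nextMessageNumber++;
    }

    // Called for every server heartbeat; heartbeats keep the link alive but are not counted.
    public void onHeartbeat() {
        lastTrafficMillis = System.currentTimeMillis();
    }

    // Periodically polled by the client: if nothing has arrived for too long, the socket
    // should be closed and the session resumed elsewhere from nextMessageNumber.
    public boolean isLinkDead() {
        return System.currentTimeMillis() - lastTrafficMillis > HEARTBEAT_TIMEOUT_MILLIS;
    }
}
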
Feb 20 2013
 

I think it is a good idea to keep things not only in mind but also here, so I can come back to them.

I decided to start from the todo list for my thesis project:

  • Finish simple prototype with simple functionality (append)  — 26.02 I hope
  • Test 🙂  — 28.02
  • Add some failure handling + finalize Update, Delete  — 01.03
  • Test some failure scenarios 🙂  — 03.03
  • Add more complex features (query)  — 10.03
  • Test 🙂  — 13.03
  • Add data store — 15.03
  • Add failure resistance functionality  —  20.03
  • Test Test Test 🙂 + Test failure scenarios 🙂   —  22.03
  • Write… … …    —   25.03
  • Think on streaming application  —  26.03
  • Do a simple prototype of aggregation application over a stream of data  — 31.03
  • Test 🙂  —  03.04
  • Do some bullshit and write an application to visualize messages in the system 🙂  —  10.04
  • Feel happy about the visualization part  —  14.04
  • Write Write Write…

For now, these dates look a bit too optimistic, but I hope that they will slip by no more than 1 week.

Gonna be a tough month 🙂

Feb 18 2013
 

Any suggestion on classification and extra Distributed Data Stores to review are more than welcome 🙂

Recently there has been increasing interest in NoSQL data storage to meet the highly intense demands of applications. Representative work includes Bigtable, Cassandra and Yahoo PNUTS. In these systems, scalability is achieved by sacrificing some properties, e.g. transaction support. On the other side, most prevailing data storage systems use asynchronous replication schemes with a weaker consistency model; e.g., Cassandra, HBase, CouchDB and Dynamo use an eventual consistency model. Conventional database systems provide mature and sophisticated data management features, but have difficulties serving large-scale interactive applications. Open source database systems such as MySQL do not scale up to the required levels, while expensive commercial database systems like Oracle significantly increase the total cost of ownership in large deployments. Moreover, none of them offers a fault-tolerant synchronous replication mechanism, which is the key piece for building robust applications.

Cassandra

  Review follows in the next posts… Some info here

Yahoo PNUTS

  Review follows in the next posts…

Combining the merits of both scalable data stores and databases, Genium Data Store (GDS) provides ACID guarantees with high scalability, fault tolerance, consistency and availability. However, in the case of GDS, wide-area network semantics are not taken into account, as the range of applications that will use GDS does not require wide-area replication.

To guarantee consistency, a few systems use Paxos to achieve synchronous replication, e.g. SCALARIS, Keyspace and Megastore.

Megastore

  Review follows in the next posts…

SCALARIS

  Review follows in the next posts…

Keyspace

  Review follows in the next posts…

In a chase for latency, MySQL Cluster is the one that can meet our requirements, however …. (should be something) 🙂

MySQL Cluster

  Review follows in the next posts…

Redis, ElasticSearch, Spanner, BlinkDB, God ….

 

Also I was thinking about the following classification:

  • Wide-Area Deployment. Systems that try to solve wide-area synchronization.
  • Short-Area Deployment. The opposite of the above.
  • Chase for latency.

The main reason for this classification is that my project is not concerned with wide-area deployment.

Jan 29 2013
 

Designing architecture from scratch may require some tools and approaches to put all the things together.

The high-level structure of software can be illustrated/represented in various ways; one of the approaches is to make an architectural blueprint with 5 different views of the system [by Philippe Kruchten].

Main views:

  • Logical View – an object model of the design.
  • Process View – concurrency and synchronization aspects.
  • Physical View – mapping of the software to the hardware.
  • Development View – static organization of the software.
  • Use Cases – various usage scenarios.

The Logical Architecture

The logical view serves the functional requirements and decomposes the system into objects and object classes. Class diagrams and class templates are usually used to illustrate this abstraction. Common mechanisms or services are defined in class utilities. Keeping a single, coherent object model across the whole system is general advice when building a logical view.

[Figure: logical view]

The Process Architecture

It takes into account non-functional requirements: performance and availability, concurrency and distribution, system integrity and fault tolerance. It can be represented as a high-level view of a set of independently executing logical networks of communicating programs that are distributed across a set of hardware. A process is a group of tasks that form an executable unit and which can be (a) tactically controlled, (b) replicated, or (c) partitioned into a set of independent tasks: major and minor (cyclic activities, buffering, time-outs).

[Figure: process view]

The Development Architecture

It represents the software module organization in the software development environment. It consists of a representation of libraries and subsystems. The subsystems are organized into a hierarchy of layers with well-defined interfaces. Overall, this view is represented by module and subsystem diagrams showing the export/import relationships. It also takes internal requirements into account. A layered style is recommended for this view.

[Figure: development view]

The Physical Architecture

It represents non-functional requirements such as availability, reliability, scalability and performance. It shows how networks, processes, tasks and objects are mapped onto the various nodes.

[Figure: physical view]

Scenarios

This fifth view is redundant, but it has two main purposes:

  • a driver to discover the architectural elements during the architecture design
  • a validation and illustration role after the architecture design is complete; it can also be used as a starting point for tests of an architectural prototype.

It uses the components of the logical view together with connector elements from the process view for the interactions between objects.

Summary


Jan 28 2013
 

The system should provide a functionality to insert, modify and read the data.

SLA:

  • One instance of the data store should handle 250 000 messages per second, where each message is no more than 200 bytes.
  • A query for a record with a unique index should be returned in no more than 10 milliseconds on a 10-billion-record database.
  • The capacity of a given response stream should be 100 Mbps.
  • The response streaming capacity of a datastore server should be 300 Mbps (divided across several clients), while receiving 100 000 messages per second.
  • A transaction should be acknowledged on the message bus within 100 microseconds.
  • Failover within 30 seconds on the same site.

It is important to note that consistency and fault tolerance are important parts of building distributed systems; however, leveraging the internal NASDAQ OMX messaging system INET provides built-in support for this functionality. In further posts (at least I really hope 🙂 lol) I will describe how the load will be handled and which techniques will be used to provide a high availability level and recovery from specific faults.