Jul 31 2013
 

I was thinking about the process of writing the master thesis report. You know, it is not the most fun part of the project, but it still has to be done.

1. Start with the questions you would like to answer with your thesis. Whether it is a well-known problem or something new that you just came up with, be critical and find the questions that you will actually be able to answer with your thesis. Obviously, don’t ask questions that are impossible to answer, e.g. is there life in another galaxy?…

2. Write preliminary conclusions and check whether the questions are answered in them.

3. Use the previously defined questions as a starting point for the problem definition. After this it should not be a problem to define and properly describe the set of limitations.

4. Use your “stuck” time properly. As writing the final report is the greatest pain in the rear part of your body, try to fill in the background and related work sections as soon as you read new papers.

5. Constantly rewrite and rethink your abstract throughout the whole project. Yes, write it before having anything done. This will help you stay focused and get straight to the point as you go.

6. Plan your experimental part after the related work, background and a “kinda ok” system description are ready. Proper selection of experiments is the key to success. Look at the evaluation sections of related papers and be consistent with the comparison data for your project.

7. Talk to your supervisor… I wish I had done it more often.

8. Use Mendeley or Zotero. It helps so much to keep all your references and notes on the papers in one place, even categorised into folders. Unlike Zotero, Mendeley also lets you add all the PDFs you have on disk, so you always have access to the full text, which can be annotated and stored in Mendeley. So my personal choice is Mendeley.

9. The easiest way to do something innovative is either to reimplement something that already exists and improve it, or to find missing functionality in something and implement it. No matter what you are doing, the identification of the problem and how/why you decided to work on it should be included in your introduction.

10. Do all the preliminary presentations properly. It does not matter what you are selling; what matters is how you do it!

Honestly, I am not sure if I followed all this advice, but what I definitely did, and what helped me a lot, was to procrastinate by writing the report 🙂 I wish you similar procrastination and good luck 😉

May 20 2013
 

The purpose of this post is to present the system’s organization and properties.

[Figure: GDS system architecture]

The figure above shows the main concepts of the system design and the functionality covered by the system. The GDS (Genium Data Store) design can be captured as a set of interacting layers, as presented in the figure. The main idea of the figure is to highlight the multilayer organization of the system, where each layer serves its own purpose and is separated from the others. The lowest two layers establish communication between nodes in the system. Nodes are both clients and data stores. Each node, when joining the system, declares its role and adds itself to the corresponding subscription group. There are several subscription abstractions, among them client and sequencer.

To maintain total ordering, a special subscription group is reserved: the sequencer group. On top of the messaging middleware a distributed replication component is placed. It supports data replication, which provides scalability and availability by reducing traffic across the components. On top of the replication layer sits a data store operation layer, which (a) supports a wide range of operations over the data, e.g., insert, update, read, and range queries; (b) frames client messages with the information needed to access the stores, hence resolving concurrency conflicts; and (c) applies a snapshot mechanism to allow safe re-requesting of range queries.

This infrastructure makes it easy to maintain and control the system. Relying on the INET messaging provides a great advantage in preventing inconsistencies and conflicts.
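To make the node-join idea a bit more concrete, here is a minimal sketch of a node declaring its role when joining the system. The names (Role, JoinAnnouncement, the wire format) are purely my own illustration, not the real GDS or INET API.

// Purely illustrative sketch: a node declares its role, which determines
// the subscription group it joins. Not the real GDS/INET API.
import java.util.UUID;

enum Role { CLIENT, DATA_STORE, SEQUENCER }

final class JoinAnnouncement {
    final UUID nodeId = UUID.randomUUID();
    final Role role;                 // determines which subscription group the node joins

    JoinAnnouncement(Role role) { this.role = role; }

    /** Message published on the shared stream so other nodes learn about the newcomer. */
    String toWire() { return "JOIN|" + nodeId + "|" + role; }

    public static void main(String[] args) {
        // A data store node announces itself; the messaging layer would place it in the
        // data-store subscription group and start feeding it the sequenced stream.
        System.out.println(new JoinAnnouncement(Role.DATA_STORE).toWire());
    }
}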

Functionality

The basic functionality provided by GDS is a distributed, consistent, fault-tolerant and scalable infrastructure that serves simple requests over data. The supported requests are insert, get, and range queries. In order to make a request, the client communicates with the storage part through the provided API. Each data store processes only those messages that belong to its partition; therefore, all information about the partitioning is kept on the sequencer to keep track of the number of replicas that serve the data.

With this functionality it is possible to:

  • Store/Retrieve the data
  • Provide consistency, availability, scalability and high performance
  • Leverage the high-performance message bus and in-memory datastore
  • Eliminate the need for highly scalable storage hardware
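As an illustration of what such a client-facing API might look like, here is a minimal sketch; the interface and type names are my own placeholders, not the actual GDS API.

import java.util.List;

/** Illustrative client-facing API: insert, get and time-bounded range query.
 *  Key, Timestamp and StoredRecord are placeholder types for the sketch. */
interface DataStoreClient {

    /** Insert a record; the request is sequenced on the bus before any store applies it. */
    void insert(StoredRecord record);

    /** Read a single record by primary key. */
    StoredRecord get(Key primaryKey);

    /** Range query over a time interval; the response may be truncated to the
     *  configured limit, in which case a follow-up request retrieves the rest. */
    List<StoredRecord> rangeQuery(Key partitionKey, Timestamp from, Timestamp to);
}

record Key(String value) {}
record Timestamp(long micros) {}
record StoredRecord(Key key, Timestamp version, byte[] payload) {}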

Data and Query Model

GDS is primarily a column-oriented data store, with a possible extension to any database provider. This is made simple, as adding new database schemas and tables into the system is relatively easy and can be plugged in through the data store API. Schemas are not flexible: new attributes cannot be added at any time but only when the table is created, as the data is stored in fixed-size columns. Moreover, each record must be marked with a timestamp, to speed up later read requests and avoid inconsistencies during updates. The timestamp of an update serves as a version, which is checked before applying the update; this way, timestamp consistency is guaranteed.
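To make the timestamp check concrete, here is a minimal sketch of a version-checked update. The class and method names are my own, and locking is deliberately omitted because operations are assumed to arrive already totally ordered.

import java.util.HashMap;
import java.util.Map;

/** Sketch of timestamp (version) checked updates: an update carries the timestamp of
 *  the version it read and is rejected if the stored row has a different timestamp.
 *  No locking is shown: operations are assumed to arrive in total order. */
class VersionedStore {
    private record Row(long timestamp, String value) {}

    private final Map<String, Row> rows = new HashMap<>();

    void insert(String key, String value, long timestamp) {
        rows.put(key, new Row(timestamp, value));
    }

    /** Returns true if applied, false if the expected version was stale (a concurrent
     *  update was sequenced first), in which case the caller must re-read and retry. */
    boolean update(String key, String newValue, long expectedTimestamp, long newTimestamp) {
        Row current = rows.get(key);
        if (current == null || current.timestamp() != expectedTimestamp) {
            return false;
        }
        rows.put(key, new Row(newTimestamp, newValue));
        return true;
    }
}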

The query language of GDS supports selection from a single table. Updates must specify the primary key, similar to PNUTS. Single table queries provide very flexible access during range requests compared to distributed hash or ordered data stores, while still being restrictive compared to relational systems.

Read Query Support

Adapting NoSQL data stores toward relational ones keeps the need for range queries. This functionality is sufficient to support further data processing and analysis in offline mode. In the trading environment, support for time-range querying is very important, as further transactional and analytic processing of the data is required. The main use cases are logging, extracting order history, price history, index calculation, etc. All these use cases dictate the necessity of range query support.

Moreover, it can be a backbone for a stable way of analyzing the data “on the fly”.

There is an extensive body of work on exploring and evaluating range queries. Among the most common solutions for supporting range querying are special locality-preserving hash functions and various distributed index structures, such as trees.

GDS relies on data locality and a timestamp index, which is added either by the user or by the data store automatically. The underlying data store ensures that each record is timestamped; therefore, lookup can be improved by specifying an approximate time range. Data in the store is divided into chunks of around 100 000 records each. Each chunk is indexed by timestamp, and records within a chunk are time-indexed as well. This level of separation significantly reduces lookup time.

It was decided to apply a limit on the range query response size. The main reason is the availability of the system, which could degrade under the transmission of unlimited-size range responses. The limit is set to a maximum of L = 10 000 records, which is around 5 MB. When the query is processed, the total query size is reported to the client. If the response exceeds L, only the first L records are transmitted to the client. If necessary, an additional request can be issued to retrieve the missing records.
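A small sketch of how such a response cap could be enforced; the names are illustrative, and only the limit itself is taken from the description above.

import java.util.ArrayList;
import java.util.List;

/** Sketch of a capped range-query response: at most LIMIT records are returned,
 *  and the total number of matches is reported so the client can decide whether
 *  a follow-up request is needed. */
class RangeQueryResponse {
    static final int LIMIT = 10_000;            // maximum records per response

    final List<byte[]> records;                 // first LIMIT matching records
    final int totalMatching;                    // reported back to the client

    RangeQueryResponse(List<byte[]> allMatching) {
        this.totalMatching = allMatching.size();
        int n = Math.min(LIMIT, allMatching.size());
        this.records = new ArrayList<>(allMatching.subList(0, n));
    }

    /** True if a follow-up request is needed to fetch the remaining records. */
    boolean truncated() { return totalMatching > records.size(); }
}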

To guarantee consistency in the case of an additional request, a simple snapshot mechanism is triggered, as sketched in the snippet below. The same procedure guarantees consistency when the TCP connection that transmits the response fails.

The snapshot mechanism works as follows:

on_range_request:
   send(type = SNAPSHOT, empty message)   // append a SNAPSHOT marker to the end of the current store
   retrieve(query)                        // read the data from the store
   send(response directly to the client)
   if (failure || response limit L exceeded)
      retrieve data until the snapshot point is reached

The snapshot mechanism is only used for the logging use case. The approach in this snippet guarantees that the range query response will be the same whenever it is requested. This holds only because there are no update operations on the time-oriented data schema.

Towards Consistency, Availability and Speed

The design of a system that needs to operate in production and within the strong SLA requirements of NOMX is complex. The system needs scalable and robust solutions for failure recovery, replica synchronization, concurrency and request routing. The servers must be resilient to many kinds of faults, ranging from failures of individual disks to machines or routers. GDS uses active replication, based on the totally ordered stream of messages produced by the sequencer, to achieve high availability and a consistent view of the data. In short, it provides fully serializable ACID semantics over the data store.

To do so, the following is used:

  • for consistency, the reliable totally ordered stream of messages produced by the sequencer is used;
  • for availability, the highly robust and fast NOMX message bus supports a great number of incoming operations, and active replication is implemented to reduce the load on any single replica;
  • for speed, the same highly robust and fast NOMX message bus is used.

It is not hard to notice that consistency, availability and performance all depend on the NOMX messaging middleware. This subsystem, whose varied functionality underpins the sustainable behavior of the GDS system, is very critical.
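The core of the active replication scheme can be reduced to a very small loop: every replica consumes the same sequenced stream and applies each message deterministically, so all replicas stay in the same state without talking to each other. A rough sketch, with hypothetical types and without the gap-filling that the real bus provides:

import java.util.concurrent.BlockingQueue;

/** Rough sketch of active replication over a totally ordered stream: every replica
 *  runs this loop against the same sequenced messages and applies them in order,
 *  so deterministic apply logic alone keeps the replicas consistent. */
class ReplicaLoop implements Runnable {
    record SequencedMessage(long sequenceNumber, byte[] payload) {}

    private final BlockingQueue<SequencedMessage> stream;  // sequenced messages from the bus
    private long lastApplied = 0;

    ReplicaLoop(BlockingQueue<SequencedMessage> stream) { this.stream = stream; }

    @Override public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                SequencedMessage msg = stream.take();
                if (msg.sequenceNumber() <= lastApplied) continue;   // duplicate: already applied
                // Gaps are assumed to be filled by the messaging layer (rewinders) before delivery.
                apply(msg);
                lastApplied = msg.sequenceNumber();
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    private void apply(SequencedMessage msg) {
        // Deterministic application of the operation carried in the payload goes here.
    }
}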

Low Latency

Latency is a critical part of a production-oriented system architecture. However, making latency a first-order constraint in the architecture is not very common. As a result, systems are usually shaped mainly by failure resilience, availability, consistency concerns, etc.

The main question here is how to design a system that is oriented towards latency. A few reductions of the system requirements for the aggressive production environment are made:

  • GDS applications do not require wide-area deployment
  • Localized disasters are not taken into account; however, this could be addressed by adding site replication

Here are the steps taken on the way to speed:

  • Lightweight Sequencer. The sequencer in the system has limited functionality: its main job is reduced to assigning a sequence number to messages and forwarding them to all subscribers. Moreover, the sequencer is completely isolated from the incoming message content; however, it can add extra information to the message, such as the sequence number or other user information. A minimal sketch of this loop is given after this list.
  • Good Decomposition. Decomposition is very important in the design of any distributed application. GDS exhibits fairly clean decoupling, with several layers and components. The roles in the system are sequencer, clients and data stores. All of them are replicated and easily replaceable. Moreover, a layer of abstraction is placed under both clients and data stores, which manages registration and communication with the sequencer and makes these transparent for both clients and stores.
  • Asynchronous Interactions. All interaction in the system is based on the well-known event-driven paradigm and relies on asynchronous communication over UDP. The underlying messaging system, which uses MoldUDP, makes the communication reliable. Moreover, if the need for a synchronous API appears, it is easy to build one on top of the asynchronous API.
  • Non-Monolithic Data. All data is kept in the column-oriented storage and partitioned by range or by hash for the different data sets, respectively. This gives the effect of highly decomposed data without any need to perform joins, which are not supported by the system.
  • Low-Latency Reliable Totally Ordered Message Bus. To improve performance, the highly scalable and fast NOMX messaging middleware is leveraged in many ways.
  • Effective programming techniques. Following the advice from [Effective C++, Java], GDS was built to reduce all possible overheads from initialization, communication and garbage collection.
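As promised above, here is a minimal sketch of the lightweight sequencer loop: it never inspects the payload, it only stamps a sequence number and forwards the message to all subscribers. Names and types are illustrative, not the real NOMX implementation.

import java.util.concurrent.BlockingQueue;
import java.util.function.Consumer;

/** Minimal sketch of a lightweight sequencer: the payload stays opaque, and the only
 *  work done per message is assigning the next sequence number and fanning it out. */
class Sequencer implements Runnable {
    record SequencedMessage(long sequenceNumber, byte[] payload) {}

    private final BlockingQueue<byte[]> incoming;      // unordered messages from publishers
    private final Consumer<SequencedMessage> fanOut;   // delivery to all subscribers
    private long nextSeq = 1;

    Sequencer(BlockingQueue<byte[]> incoming, Consumer<SequencedMessage> fanOut) {
        this.incoming = incoming;
        this.fanOut = fanOut;
    }

    @Override public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                byte[] payload = incoming.take();                         // content is never inspected
                fanOut.accept(new SequencedMessage(nextSeq++, payload));  // stamp and forward
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}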

Summary

GDS is a unique distributed system built on top of the reliable total order multicast messaging middleware developed in-house by NOMX. It is built to serve a large number of requests per second and to do so fast, with consistency, fault tolerance and availability in mind. Moreover, it benefits from the performance of the NOMX messaging system.

A wide set of operations is supported over the data, such as insert, read, range query and update. This set is spread over two different data sets: immutable log records and mutable object records, which are actively replicated through the total order stream of messages from the sequencer. Over the immutable data two types of operations are supported: insert and range query. Mutable data supports three operations: insert, update and get. The first subset is made reliable with extra fault resilience, e.g., against link failures. The second subset provides resolution for concurrent updates, e.g., timestamp consistency. Depending on the data type, the data is partitioned either by range or by hash, respectively, to guarantee maximum performance of the subset operations.

Further chapters describe the architecture of the system and provide a proof of concept for the performance, scalability and failure resilience properties of the prototype system.

 

 

Mar 04 2013
 

This is a quite preliminary version of the problem description, i.e. the motivation.

Again, any comments are more than welcome 🙂

Problem description

There are many existing distributed systems (DS) focused on optimizing various system properties, e.g. availability, robustness, consistency. Designing a distributed data storage and data processing system for a real-time stock exchange environment is quite challenging and must meet strict SLA requirements. Current general-purpose solutions are eager to sacrifice some properties in order to achieve great improvements in others. Moreover, none of them leverages the properties of uniform reliable total order multicast [] to supply fault tolerance and ACID properties for the data operations. (Here a few paragraphs with some basic classification of DSS and their solution focus).

However, despite algorithmic advancements in total order broadcast and the development of distributed database replication techniques based on it, there is limited research on applying these algorithms to large-scale data storage and data processing systems. (Here are a few sentences about total order algorithms and their applications). The limited application in real-time large-scale systems might be due to previous scalability issues of the messaging systems, which were limited by the message bus capacity.

We propose a system based on the NASDAQ OMX low-latency uniform reliable totally ordered message bus, which is highly scalable (the capacity of the message bus exceeds 2 million messages per second), available, and consistent. This messaging abstraction turns an unordered incoming stream of data into an ordered sequence of operations, which is backed up by rewinders; a message gap-filling mechanism is therefore automatically supported and served by them. The ordered stream of data is published on the so-called “message stream” and is seen by everyone on the stream. Based on this message bus, optimistic delivery can be assumed. In other words, an early indication of the estimated uniform total order is available, and it is guaranteed that all messages eventually commit in the same order at all subscribed servers.

The main focus of this work is to leverage a reliable total order multicast protocol for building a real-time, fault-tolerant, ACID and low-latency distributed data store system. The major difficulty is to guarantee fault tolerance and availability for the system and ACID properties for the data operations. Moreover, running the system in real time is challenging, and maintaining distributed read queries and concurrent updates is no straightforward endeavor. To reach the performance goals, the following approach is applied:

  • Scalability: Adding extra instances on the stream is very easy; the only thing required is to declare the schemas and tables served by the data store.
  • Availability: The ability to serve requests at any given time is provided for both simple operations and queries. First, the capacity of the message bus can handle the simple operations without extra tweaks. Second, read query responses are sent directly to the requester and are served by the fastest data replica.
  • Consistency: As the underlying message passing abstraction produces a uniform reliable totally ordered stream of requests, each instance sees exactly the same sequence of messages. This gives a consistent view from any instance at any request time. Similarly, for concurrent updates, totally ordered per-update timestamps are used, hence timestamp concurrency control [] is deployed.
  • Fault-Resilience: As an identical stream of requests is received by every replica, the failure of any instance during simple operations does not matter. Failure of a data store while serving a query is handled by a simple snapshot indication message on the message stream. This way the query can be requested again from the point of failure.
  • Read Query Support: In order to increase the availability level, a limit on the query response size is set. If an extension of the response is required, the query should be submitted again.

 

Mar 02 2013
 

I think it is kind of time to start working on the report draft 🙂

Here is the first version of an abstract for my project report. Any comments are more than welcome!

Abstract

In recent years the need for distributed, fault-tolerant, ACID and low-latency data storage and data processing systems has paved the way for new systems in the area of distributed systems. The growth of unbounded streams of data and the need to process them with low latency are some of the reasons for the interest in this area. At the same time, it has been argued that total order algorithms are a fundamental building block in the construction of distributed fault-tolerant applications.

In this work, we leverage the NASDAQ OMX low-latency uniform reliable totally ordered message bus with a capacity of 2 million messages per second. The ACID properties of the data operations are easily implemented using the message bus, as it forwards all transactions in a reliable total order fashion. Moreover, relying on the reliable totally ordered messaging, active replication support for fault handling and load balancing is integrated. Consequently, a prototype was developed against requirements from a production environment to demonstrate its feasibility.

Experimental results show that around 250 000 operations per second can be served with 100 microsecond latency. Query response capacity is 100 Mbps. It was concluded that uniformly totally ordered, sequenced input data can be used in real time by large-scale distributed data storage and processing systems to provide availability, consistency and high performance.

Feb 20 2013
 

I think it is a good idea to keep things not only in mind but also here, so I can come back to them.

I decided to start with the todo list for my thesis project:

  • Finish simple prototype with simple functionality (append)  — 26.02 I hope
  • Test 🙂  — 28.02
  • Add some failure handling + Finalize Update, Delete — 01.03
  • Test some failure scenarios 🙂  — 03.03
  • Add more complex features (query)  — 10.03
  • Test 🙂  — 13.03
  • Add data store — 15.03
  • Add failure resistance functionality  —  20.03
  • Test Test Test 🙂 + Test failure scenarios 🙂   —  22.03
  • Write… … …    —   25.03
  • Think on streaming application  —  26.03
  • Do a simple prototype of aggregation application over a stream of data  — 31.03
  • Test 🙂  —  03.04
  • Do some bullshit and write an application to visualize messages in the system 🙂  —  10.04
  • Feel happy about the visualization part  —  14.04
  • Write Write Write…

For now, these dates look a bit too optimistic, but I hope they will slip by no more than a week.

Gonna be a tough month 🙂