React

Anyone who has worked on UI understands how tricky it is to manipulate and maintain the DOM. When the underlying data changes, the complexity of re-rendering the DOM grows quickly and performance deteriorates.

That's exactly the problem React solves for us.

Let's talk about how React solves this puzzle.

  1. React is Facebook's UI library.
  2. We build React components using the React library. Check this HelloWorld component. A component can have state (variables).
  3. The component's render method returns a virtual DOM.
    React_Virtual_DOM
  4. When the component's state changes (the values of those variables), the render method regenerates the virtual DOM.
  5. React compares (diffs) the new virtual DOM against the previous one and applies the minimal set of changes to the real DOM. This is the piece that makes React so performant.

You only think about the component's state at any point in time, and React takes care of rendering it with great performance.

There are a lot of other small details and questions you will need to find answers to, like: are React components the same as GWT or ExtJS components? How do you compose React components? How do you encapsulate a component? What about events? Are the virtual DOM and the shadow DOM the same? What are props and state? Why React? Can React be used with another MVC framework? Does it have its own MVC? What is Flux? What is the impact on SEO? Can I build something complex with React? What about unit tests? How different is it from AngularJS? And a lot more…

…but in a nutshell, this is what React is, and it's awesome. I am personally fully engrossed in it. Next I will talk about React and Flux, stay tuned.


Terms I struggled with while learning JavaScript frameworks

As I mentioned in my last post, I come from a backend development background, so when I started learning jQuery, React and AngularJS I was bombarded with keywords… Node, Node.js, AMD, CommonJS, ES6, polyfill, shim, etc.

It took me a while to understand these terms in context. Here is a brief description of the most important ones.

Node: Node is a runtime built on V8, the JavaScript engine that runs in the Chrome browser.

JavaScript is a programming language, so in principle you can use it anywhere, even on a server. Node lets us use JavaScript on the server side.

Node.js: Node.js lets you run a web server (playing the role of Apache or NGINX) on top of Node. There are loads of other JavaScript libraries you can use to add spice to Node.js, for things like payments, OAuth2, etc.

JavaScript (before ES6) does not have built-in support for modules, but the community has created workarounds: AMD and CommonJS.

AMD, CommonJS: the de facto specifications for how modules and their dependencies should be declared in JavaScript applications.

CommonJS: mostly suited to Node rather than the browser. Node.js and RingoJS implement this specification.

AMD: designed with the browser first in mind. RequireJS and curl.js implement this specification.

ES6: JavaScript is built upon ECMAScript. Languages such as ActionScript, JavaScript and JScript all use ECMAScript as their core. Most browsers today support ES5. ES6 is the sixth major edition of the ECMAScript language specification. Most of today's browsers do not fully support it yet, but the community helped here too: transpilers.

Transpilation: A transpiler is a type of compiler that takes the source code of one programming language as its input and outputs source code in another programming language. For ES6 to ES5 there are transpilers available, such as Traceur and Babel.

Shim: A shim is code that brings a new API to an older environment, using only the means of that environment. It intercepts calls and provides different behaviour. The term is not limited to JavaScript or web applications; for example, a shim might intercept a call like DateTime.Now and adapt it to the newer DateTime API.

Polyfill: a type of shim. It provides missing functionality that you expect your browser to have: it detects the missing API and supplies its own implementation, e.g. the prototype methods added to Array in ES5 (filter, map), or the fetch API.

gulp, grunt, browserify, webpack, JSPM: Think of them as your Ant-like (task-based) build tools combined with a pom file for dependency management. There are subtle differences between the first two and the rest, but at a high level, and in combination, that's what they do.

npm: a package manager for JavaScript. Consider it your Maven Central repository.

migration: backend to UI developer

I am writing this blog post to clear up some myths about UI development and provide more clarity for backend developers who are a little reluctant to work on UI.

After spending a long career as a backend developer, a 100% UI assignment came to my plate. It's not that I had never done UI work, but most of the time it was temporary or patch work. This was the first time I was hearing terms like hoisting, closures, linters, call, apply, bind, monads, etc. Most of them were alien to me… 😦

At first I was resistant to diving into UI and nervous about the overall situation.

Though I soon realized that this type of work was different… It was not about designing screens and spending nights debugging CSS styles to get the alignment right across browsers. It was about building a full-blown web application using JavaScript. I was still nervous because that is not my core area and it was going to eat up a lot of personal time. Yes, no matter how much time you have spent in the industry, new things do demand a lot of time to understand the concepts in depth, even more so if you are going to be part of the core team building something from scratch.

So I started searching for random pieces of advice all over the internet to get on top of JavaScript quickly. I had written some serious JavaScript, CSS, Flex, GWT and ExtJS code for production before, but I had never paid enough attention to learning the language fundamentals. The past was more about using frameworks than building one.

I went through a lot of random pieces and finally settled on the following list. I would highly recommend getting these concepts clear if you want to do some serious coding in JavaScript, and even if you are already a UI developer you can use it to test your understanding.

JavaScript stuff to know:

  • Closures
  • Execution Context and scope chain
  • Hoisting
  • Null vs undefined vs undeclared variables
  • ‘Use strict’ mode
  • Linters: jslint, jshint, eslint
  • The functions: call, apply, bind
  • CSS selector hierarchy
  • currying and monads
  • mixins
  • build tools (grunt, gulp or webpack)
  • es5 vs es6
  • Go through this awesome video ‘JavaScript: Understanding the Weird Parts’: https://www.youtube.com/watch?v=Bv_5Zv5c-Ts
  • Books: Javascript Ninja

I am by no means an expert on UI technologies and I am in no way claiming this is a complete list, but these are must-know things, and the idea is to spark some questions in your mind. If I have missed some important aspect, please post it in the comments and I will add it to the list.

One more piece of advice: be a polyglot programmer. Knowing many programming languages and paradigms (functional vs imperative, statically vs dynamically typed, JVM vs non-JVM, etc.) will help you understand new concepts quickly and make you an invaluable engineer.

I will be talking about MVC, the V in MVC, Angular vs React, React with Flux, Redux, build tools, etc. in coming blogs, so stay tuned.

Increase throughput with async and non-blocking calls

If you really want to hear about it, the first thing you probably want to know on a workday morning is how your last day's work performed. Well, I was lucky this morning, hearing big applause for winning yet another hackathon at a client location. In a hackathon you build something useful in 8-24 hours. Facebook's Like button was built in one of their hackathons.

I chose a topic to showcase "how the right choice of technology and programming style can dramatically increase application throughput". The first question that came to my mind was how I would demonstrate that something I built is better; I realized that I needed something to compare against. I should build something the traditional way, measure the throughput, then build exactly the same thing the way I think is better and compare the results. That's it. My instinct told me this was the way to go 🙂

I built an application to mock the OAuth 2.0 response of Facebook. I kept it very simple: one validation to check whether an access token exists in the request, and then flush a dummy response. The source code is here on GitHub.

I then built the same functionality using Play and Scala; get the source code here. I chose Play because it uses Netty, a non-blocking server, at the backend, which I thought would boost my throughput further along with the style of programming (async and non-blocking) I wanted to showcase. Play also allows hot swapping (changing and testing code without a restart). And Scala because it is easy to write concise, asynchronous, non-blocking code with it, and I thought it would also shed some practical light on my previous blog, the multicore crisis.
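To give a flavour of that style, here is a minimal, illustrative sketch of an async controller (assuming Play 2.x's Action.async; FacebookMock and facebookAsync are indicative names, not the exact hackathon code):

    import play.api.mvc._
    import scala.concurrent.Future
    import scala.concurrent.ExecutionContext.Implicits.global

    object FacebookMock extends Controller {
      // Validate the token, then return a canned OAuth-style response without blocking the server thread
      def facebookAsync(accessToken: Option[String]) = Action.async {
        accessToken match {
          case Some(_) => Future.successful(Ok("""{"id": "12345", "name": "Mock User"}"""))
          case None    => Future.successful(Unauthorized("missing access_token"))
        }
      }
    }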

I then wrote a JMeter test script to put load on both applications. From this script I first ran the "facebook_imperative" test, which puts load on the traditional web application deployed on Tomcat with 300 threads configured in server.xml. I used 400 JMeter threads to fire concurrent requests for one second. The following is the result of the test. As expected, the application throughput is around 544 requests per second with a 3+% error rate. The errors are because Tomcat could not handle that many requests and some of them timed out. The errors are obviously something I want to avoid because they compromise application reliability.

traditional_sync_test

I wrote exactly the same functionality in Play and Scala and ran the corresponding test, "facebookmock_sync", from the same script. The execution context for this also used exactly 300 threads (check FacebookMock.facebookSync and config.ConfigBlocking.executionContext). JMeter again used 400 threads to fire requests.

play_scala_sync_test

Now compare the results. There are no errors in the Play and Scala app and the throughput is 6000+ requests per second. That is roughly eleven to twelve times the throughput. If I can take the liberty of attaching some meaning to it, it means I can serve over ten times more customers with the same infrastructure and with much quicker response times (look at the Average column). It also means I need a much smaller tech-ops team and I will have more happy customers for less expense/investment.

I decided to take this further and thought of adding a more realistic simulation. Not all our calls are non-blocking like this mock response; some of them may be blocking calls, e.g. JDBC calls. I simulated this by adding a delay of one second using Thread.sleep before responding. That's the way we usually write code: just block the thread and wait for the result 😦
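For reference, the blocking variant is just a normal action that sleeps on the request thread; something like this illustrative sketch (the method name is mine, not from the repo):

    // Simulate a 1-second JDBC-style call: the server thread is held hostage for the whole second
    def facebookBlocking = Action {
      Thread.sleep(1000)
      Ok("""{"id": "12345", "name": "Mock User"}""")
    }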

I ran the blocking test, "facebookmock_blocking_Imperative", on the traditional application. For this I had the same application deployed on Tomcat, but now the "doGet" method was using the blocking version of the mock response. The corresponding test, "facebookmock_async", ran against the Play app.

traditional_blocking_test

play_blocking_test

As you can see in the figures above, the error rate (18%) in the traditional app is unacceptable, so the throughput shown in the 'traditional_blocking_test' figure is misleading. Meanwhile there are still no errors in the Play app. The difference in the code is that I released the server thread as early as I could using Futures and Promises. In fact I tried the blocking test even with 1000 JMeter threads: the traditional app gives a 50% error rate whereas the Play app still survives with 0% errors.

The best part is here. I converted the blocking call into a non-blocking one using a scheduler, and this time I used only one thread from the pool (check Scheduler.scala and Config.scala). And wow! The Play app still works with just one thread, with even slightly better results. That's really awesome!
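The core of the trick looks roughly like this (a hedged sketch, not the actual Scheduler.scala): instead of sleeping, ask Akka's scheduler to complete a Promise after one second, so no thread sits parked while we wait.

    import scala.concurrent.{Future, Promise}
    import scala.concurrent.duration._
    import akka.actor.ActorSystem

    object NonBlockingDelay {
      val system = ActorSystem("mock")
      import system.dispatcher                      // execution context for the scheduled task

      def delayedResponse(payload: String): Future[String] = {
        val p = Promise[String]()
        system.scheduler.scheduleOnce(1.second) {   // fires on a timer tick, not a blocked thread
          p.success(payload)
        }
        p.future
      }
    }

The controller then wraps delayedResponse in Action.async, and the single-thread pool never blocks.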

play_non-blocking_test

I have always believed that programming is an art and that one should handle it with extreme delicacy and respect, to craft a marvelous piece you want the world to admire. This hackathon provided me one more opportunity to follow and showcase that.

Multicore crisis and Functional Programming

One fine morning last November I was reading the article "The Free Lunch Is Over" at the coffee table. While trying to understand the article, I found myself regretting that I had never taken functional programming seriously until then. I realized that all these days I had been writing sequential programs and had never tried to parallelize them. I was so focused on memory utilization that I had almost always taken CPU performance for granted and left it to the chip designers to make my programs run faster.

As shown in the following graph, taken from Herb Sutter's article above, CPU clock speed (dark blue) is no longer increasing exponentially the way it did until about 2005. Moore's law may still hold for the exponential growth of transistors, but clock speed has flattened out. That means the number of instructions processed by a single core per second has hit a wall. Instead, chip designers are adding more cores, on-chip cache, hyper-threading and read/write optimizations to support the demand for more processing.

moores law

However, the way most of us write programs keeps these extra cores either idle or busy running spyware and malware 🙂 Have a look at the CPU utilization below of my quad-core (8 logical cores) processor while my JVM is up. We can see that only one core is being used, and that at barely 50% of its capacity 😦 which, totalled across all 8 cores, comes to less than 10%.

8 unused cores

In fact, the addition of each core can make our imperative-style programs slower, since individual cores are clocked more conservatively for heat and power reasons. It's going to be hard to explain to the client that despite adding more cores to the machine, our programs are not running faster and are utilizing only 20% of the capacity.

Well, nevertheless! That's how I got pulled towards functional programming and spent sleepless nights over the last three months understanding its core concepts, tying the knots together, and presenting the topic to the client team to make sure I understood it well enough to convince people. And it paid off, it paid off well.

I used various dimensions explained by experts to understand where each programming language stands. You can add other dimensions to the table below, like honesty about side effects, commercial value, popularity, etc., to bring C#, F#, C++, Haskell and Erlang into the game.

           Java     Scala    JRuby     Clojure
Typing     Static   Static   Dynamic   Dynamic
Paradigm   OO       OO/FP    OO        FP

I chose Scala as the language for understanding core functional programming concepts, because Scala is like a radio dial with OO at one end and FP at the other. You can tune it to the level you are comfortable with and keep adjusting the dial as you learn more; previous work experience in Groovy certainly helped me pick up the concepts quickly.

The first and foremost benefit I observed from using a functional language was its honesty about side effects. Beyond that, I admire the following features:

  1. Concise code. Imagine how much code we would write in C# or Java to achieve the same:
    1. val someNumbers = List(1, 2, 3, 4, 5, 6, 7, 10, 34, 46, 75, 100)
    2. val onlyEven = someNumbers filter (_ % 2 == 0)
    3. val onlyOdd = someNumbers filter (_ % 2 != 0)
    4. val onlyMoreThan25 = someNumbers filter (_ > 25)
  2. Functions are first-class citizens: higher-order functions, closures, partial functions and currying
    1. def f(x: Int) = x * 2
    2. def g(y: Int) = y + 2
    3. You can compose functions like f(g(2)), which gives the result 8
  3. Type inference
    1. Map<Integer, String> employee = new HashMap<Integer, String>(); — this is Java. Didn't we already tell it the type in the first part of that line?
    2. val capital = Map("US" -> "Washington", "France" -> "Paris") — Scala is statically typed but infers the type wherever it can, which means you 'type' less 🙂
  4. Lazy evaluation
  5. Control abstraction
  6. Pattern matching and extractors
  7. XML literals
  8. Traits
  9. Akka and concurrency
  10. Modular programming
  11. Tail-call optimization
  12. Parallel collections

Each of them actually deserves its own blog post, so I could not provide examples for all of them here, but the small sketch below gives a quick taste of two: currying and lazy evaluation.
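This is just an illustrative snippet (not from the original list above) that you can paste into the Scala REPL:

    // Currying: a function that takes its arguments one at a time
    def add(x: Int)(y: Int) = x + y
    val addFive = add(5) _          // partially applied: Int => Int
    println(addFive(10))            // 15

    // Lazy evaluation: the right-hand side runs only when the value is first needed
    lazy val expensive = { println("computing..."); 42 }
    println("before access")        // nothing has been computed yet
    println(expensive)              // prints "computing..." and then 42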

To prove my understanding to myself, I took one of my old pieces of Java code in which I was processing an incoming compressed file that in turn contained multiple files. I was processing these files inside a for loop like the one below, which was taking 7.5 seconds to complete, and only one of the 8 cores was being used because of the sequential style of processing.

for (File currentFile : uncompressedFiles) {
    process(currentFile);   // each file handled one after another, on a single thread
}

Then I rewrote the same processing using the Akka actor system. I created 8 actors to process one file each. The graph below shows that all 8 cores got utilized and the processing completed within 2.5 seconds. I was quite astounded by the results and I will certainly showcase it to the client.

8 used cores

// Create the master actor and tell it to fan the work out over 8 workers
val m = system.actorOf(Props[WordCountMaster], name = "master")
m ! StartCounting("src/main/resources/", 8)
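For context, here is a minimal sketch of how such a master/worker pair can be wired up with Akka actors. The class and message names follow the call above, but the bodies are illustrative, not the exact code from my project:

    import java.io.File
    import akka.actor.{Actor, ActorSystem, Props}

    case class StartCounting(dir: String, workers: Int)
    case class ProcessFile(file: File)

    // Worker: handles one file per message, so many files are processed in parallel across cores
    class FileWorker extends Actor {
      def receive = {
        case ProcessFile(f) => println(s"processing ${f.getName} on ${Thread.currentThread.getName}")
      }
    }

    // Master: fans the files out to a small pool of workers, one message per file
    class WordCountMaster extends Actor {
      def receive = {
        case StartCounting(dir, workers) =>
          val pool  = (1 to workers).map(i => context.actorOf(Props[FileWorker], s"worker-$i"))
          val files = new File(dir).listFiles().toList
          files.zipWithIndex.foreach { case (f, i) => pool(i % workers) ! ProcessFile(f) }
      }
    }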

Right now, I plan to set the book aside, leaving it on my coffee table, and see where in my daily work I can use it to ease development effort, write less code, and improve performance. If it works out, then I may return and study it further, picking up from where I am now. I'm so happy to have these insights; they have certainly made me a better programmer. What do you think of this topic? Leave a comment and I will see what I can shed more light on.

Hadoop and MapReduce

Hadoop caught my attention recently when I was looking for a BI solution that could tell me about application usage and trends, from various angles, over the years. It took me a while to understand what exactly Hadoop is, how MapReduce complements it, and how together they could help me solve the problem of finding trends in huge, unstructured log files. I thought of putting this learning in simple terms to help others get it quickly. I have also presented this topic; the recording is available at

http://www.anymeeting.com/tushar/EA55DC838847

What is Hadoop: It is a framework that allows processing of large data sets across clusters of computers (commodity hardware).

Hadoop includes three major sub-projects

Hadoop Common: It is a set of utilities to support Hadoop subprojects. It includes serialization, RPC and filesystem.

HDFS: It is a scalable, fault-tolerant, high-performance distributed filesystem.

Hadoop MapReduce: It is a programming model that supports parallel processing of large datasets.

Hadoop Architecture:

Client data gets written to multiple datanodes, as directed by the master node (the NameNode in HDFS).

We specify the block size in the Hadoop configuration file. The client's data gets split into blocks of this size and distributed across the cluster of datanodes. We also specify the replication level; each block is replicated to that many datanodes to support failover.

HDFS itself does not claim fast lookup/access; it is designed for storing large files. Datastore systems built on top of it, like HBase, store indexes of the files kept in HDFS so the data can be searched quickly. This is a common point of confusion.

What is MapReduce: It is a simple programming model for processing highly distributed datasets using a large number of computers (nodes). The datasets can live on a filesystem (unstructured) or in a database (structured).

Map: It solves a small subset of the problem and passes the result to the master node. The same function is applied to every element of the input. The output is always a new list.

e.g. if you have a list (2, 4, 5, 6, …, n) and want the square of each element, then the map is map(square, (2, 4, 5)) = (4, 16, 25). The important point is that the function "square" can be applied to each element of the list independently. Hence you can pass the function square itself to a computing node, where it can be applied to a small subset of the entire list. The computing node already holds a subset of the input list (a block), since the input is spread across the cluster of datanodes.

Reduce: This is a combiner. It collects the output of each map and reduces/combines it to produce the desired output. e.g. if you want the sum of the squares of the list (2, 4, 5, …, n), then multiple maps complete the squaring of their sublists and pass the results to the reducer. The reducer then appends all the maps' outputs and applies the function sum to calculate the desired output.
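The same idea can be expressed with plain Scala collections; this is just the concept, not the Hadoop API:

    val numbers = List(2, 4, 5, 6)
    val squares = numbers.map(n => n * n)   // the "map" step: square each element independently
    val total   = squares.reduce(_ + _)     // the "reduce" step: combine the partial results
    println(total)                          // 81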

MapReduce in Hadoop

JobTracker (Master)

  • Splits the input and assigns it to the various map tasks
  • Schedules and monitors map tasks (via heartbeats)
  • On completion, schedules the reduce tasks
  • If a datanode fails, it reschedules its tasks for re-execution wherever a replica of the block is available in the cluster

TaskTracker (Slave)

  • Executes map and reduce tasks
  • Handles partitioning of the map output
  • Handles sorting and grouping of the reducer input

WordCount example: This is the "hello world" program for explaining MapReduce.

Suppose you have a text file containing "The quick brown fox. The fox ate the mouse. How now brown cow?" and you want to know how many times each word is repeated within the file.

The Hadoop configuration (master node) will split this input file. Say it splits it into three blocks spread across three datanodes (computers):

1. the quick brown fox
2. the fox ate the mouse
3. how now brown cow

As shown in the figure above, the JobTracker will ask three TaskTrackers to run the map on these three blocks on their respective datanodes.

Please note that the computation happens where the data resides, which is what makes the processing fast. It is also possible that all input blocks are on a single datanode and multiple maps run in parallel on that node.

The JobTracker has divided one big task into three smaller tasks that can be completed quickly by running them in parallel.

So each map counts the words in its block and produces a new list, e.g. [{the,1}, {quick,1}, {the,1}]. The "shuffle and sort" phase then partitions the output of each map using a hashing mechanism, so that the same word ('the') falls into the same partition. The reducer then processes these partitions to count the occurrences of each word, e.g. [{the,2}, {quick,1}]. The reducer waits until all the maps have finished their tasks and the partitioning is complete.
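Again just to illustrate the logic (plain Scala collections, not the Hadoop mapper/reducer API), the whole pipeline fits in a few lines:

    val text   = "the quick brown fox the fox ate the mouse how now brown cow"
    val counts = text.split(" ")
                     .groupBy(identity)                        // the shuffle: group identical words together
                     .map { case (w, ws) => (w, ws.length) }   // the reduce: count each group
    println(counts("the"))                                     // 3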

Map Phase:

  • Map tasks run in parallel

Shuffle and sort phase:

  • Map task output is partitioned by hashing the output key
  • The number of partitions equals the number of reducers
  • Partitioning ensures that all key/value pairs sharing the same key belong to the same partition
  • Each map partition is sorted by key to group all values for the same key

Reduce Phase:

  • Each partition is assigned to one reducer
  • Reducers also run in parallel
  • No two reducers process the same intermediate key
  • A reducer gets all values for a given key at the same time

Advantages

  • Locality
  • Parallelism
  • Fault tolerance
  • The hadoop-streaming utility lets you create and run map/reduce jobs with any executable or script as the mapper/reducer

Uses of Hadoop:

  • Building search indexes (the workload MapReduce was created for at Google; Hadoop runs similar jobs at companies like Yahoo! and Amazon)
  • Analyzing user logs, data warehousing and analytics
  • Large-scale machine learning and data mining applications
  • Legacy data processing that requires massive computational power

How did Hadoop help solve my problem?

The following ecosystem diagram explains how Hadoop and its subprojects helped me solve my problem of analyzing huge log files.

How to find and fix memory leaks

We learned about the approach to probing memory leaks in my previous blog; now it is interesting to see how we can actually find and fix them.

If you have taken heap dumps of the production environment, load them into a profiler and find the biggest objects by retained size. The top three or four objects are the first suspects to put on trial. Check whether a considerable share of the used memory is being held by those objects; if it is, these are the objects you should investigate first. But don't trust a single heap dump: compare multiple heap dumps and prepare a list of suspicious objects. I would also advise setting up a quick test with very high load and taking a heap dump to see whether the top suspects match. You should focus on objects whose GC roots can be established; the others are unreachable and will be collected by the garbage collector anyway, hence they are not usually what is eating up your memory.

Most of the time these objects will be JDK library classes, as indicated in the following diagram, which doesn't point directly to application code at first glance. It is important to understand how to link these JDK classes back to your application, and most profilers help us here.

In YourKit, if you select the "Object Explorer" view in the bottom tabs, it shows the individual instances of these classes and their sizes, as shown in the diagram below. If the object you are browsing is a collection type, you can expand the individual instance to see which objects it holds internally.

Remember, these JDK classes themselves are not the problem; the real culprit is buried in your code, which is creating these objects. To find the link to your code, right-click the instance shown in the "Object Explorer" tab and select "Paths to GC Roots" (YourKit) or "Immediate Dominators" (MAT). It will take you to the class that is spawning these instances, as shown below. Check whether there is something wrong in that piece of code. Is it creating too many instances? Is it leaving resources unclosed or unreleased?

If you let such an application run for a prolonged period, you will see a constant gap building up between available and used memory. At a certain point even the garbage collector cannot free up enough memory and the JVM throws a "GC overhead limit exceeded" error. You can disable this warning using a JVM parameter, however that just postpones the more serious problem, which will hit you back with an "OutOfMemoryError".

If most of the memory is retained by Finalizer objects, as shown below, then we have another problem. I would strongly recommend reading Effective Java, Item 7: "Avoid finalizers", and asking whether you really need the finalize method you are using. I bet you will be convinced that a finalize method is not worth its cost and will find an alternative.

Sometimes changing the GC policy from parallel (-XX:+UseParallelGC) to concurrent (-XX:+UseConcMarkSweepGC) also helps improve the situation. You can also tune how much of the VM's time may be spent in GC before the overhead-limit error is thrown (see the -XX:+UseGCOverheadLimit flag and its related options). However, as a rule of thumb, I always suggest using JVM parameters as a last resort.

Increasing the JVM heap size (-Xmx512m) could also improve the situation. Since your application will have more memory available for objects to live in, it can survive longer. Meanwhile the Finalizer thread will remove objects from the reference queue, which allows those objects to be garbage collected. How much this helps depends on how many objects of the classes with a finalize method you create and how fast; creating them faster than they can be finalized will obviously not let this solution win over the Finalizer problem. There is also a limit to how much memory you can allocate to the JVM. A 32-bit JVM process can address a maximum of 4 GB, out of which 2 GB is reserved for the Windows kernel. Out of the remaining 2 GB you need some for the PermGen space to accommodate your class definitions, method code, etc.; some space is required for native threads; and the rest you can use for heap space. A 64-bit JVM process can be effective in this situation. However, even if you are able to work around the Finalizer problem using one of these techniques, I would still recommend replacing the finalize method with an alternative design.
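As a sketch of what such an alternative can look like on the JVM (illustrative, in Scala): release the resource explicitly in a finally block, ideally behind a small "loan" helper, instead of relying on finalize:

    import java.io.{BufferedReader, FileReader}

    // The caller borrows the reader; the helper guarantees it is closed, deterministically
    def withReader[A](path: String)(use: BufferedReader => A): A = {
      val reader = new BufferedReader(new FileReader(path))
      try use(reader)
      finally reader.close()
    }

    val firstLine = withReader("app.log")(_.readLine())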

The next area we should look at is whether we have any Java-level thread deadlocks. Usually a prolonged soak test detects thread deadlocks, and profilers are smart enough to point them out with thread stack traces; even JConsole has this functionality. Study the thread stack carefully and it should point you to the area within the application code that produced the deadlock. It could be caused by using a static variable like an instance variable, or by threads each holding a resource (connection/statement/socket) the other needs and waiting on each other. The best pointer will be your thread stack.

Your application could also become very slow because of thread contention. Look at the diagram below. It shows that almost all threads are waiting to get an instance from a singleton class's getInstance method. Do you really need a singleton here? Revisit your design; see if you can create the instance at declaration itself, which removes the need for the synchronized keyword on getInstance and avoids the thread contention, as in the sketch below.
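A hedged sketch of that change: initialize the instance eagerly (in Java, a static final field returned from an unsynchronized getInstance; in Scala, an object), so no lock is taken on the hot path:

    // Built once, on first access, by the JVM's class-initialization machinery;
    // later reads take no lock, so threads no longer queue up on getInstance
    object Config {
      val settings: Map[String, String] = Map("pool.size" -> "300")
    }

    val poolSize = Config.settings("pool.size")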

Look at the status of threads across multiple thread dumps. If you find any single thread always in the "runnable" state, check its stack trace to figure out whether you have forgotten an exit condition in a loop or something similar.

I think I should stop here for now, as I am starting to feel hungry and should grab something to eat…