Wednesday, January 20, 2016

Dump RPC stats with JGroups

When invoking remote procedure calls (RPCs) across a cluster with RpcDispatcher, it can be useful to know how many RPCs of which type (unicast, multicast, anycast) are invoked, by whom and to whom.

I added this feature in 3.6.8-SNAPSHOT [1]. The documentation is here: [2].

In short: since this feature is costly, it has to be enabled with probe.sh rpcs-enable-details (and disabled with probe.sh rpcs-disable-details).

Once enabled, the invocation times of synchronous (blocking) RPCs are recorded (async RPCs are ignored).
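
For context, this is roughly what the different RPC types look like when invoked through RpcDispatcher. It's a minimal sketch, not code from the feature itself: the Server class and its add() method are made up for illustration, and the exact RequestOptions constructors/setters may differ slightly between versions. Only the calls using the blocking (GET_ALL) options would show up in the timing stats; the GET_NONE call is async and ignored.

import java.util.Arrays;
import java.util.List;
import org.jgroups.Address;
import org.jgroups.JChannel;
import org.jgroups.blocks.MethodCall;
import org.jgroups.blocks.RequestOptions;
import org.jgroups.blocks.ResponseMode;
import org.jgroups.blocks.RpcDispatcher;
import org.jgroups.util.RspList;

public class RpcTypesDemo {
    public static class Server {                   // hypothetical server object whose methods get invoked remotely
        public int add(int a, int b) {return a + b;}
    }

    public static void main(String[] args) throws Exception {
        JChannel ch=new JChannel();                // default stack (udp.xml)
        RpcDispatcher disp=new RpcDispatcher(ch, new Server());
        ch.connect("uperf");

        MethodCall call=new MethodCall("add", new Object[]{1,2}, new Class[]{int.class,int.class});
        RequestOptions sync=new RequestOptions(ResponseMode.GET_ALL, 5000);   // blocking, 5s timeout
        RequestOptions async=new RequestOptions(ResponseMode.GET_NONE, 0);    // fire-and-forget

        List<Address> members=ch.getView().getMembers();

        // sync unicast RPC: one target, invocation time gets recorded
        Integer rsp=disp.callRemoteMethod(members.get(0), call, sync);

        // sync multicast RPC: dests == null means all members
        RspList<Integer> all=disp.callRemoteMethods(null, call, sync);

        // sync anycast RPC: a subset of the members, sent as N unicasts instead of one multicast
        sync.setAnycasting(true);
        RspList<Integer> some=disp.callRemoteMethods(Arrays.asList(members.get(0), members.get(1)), call, sync);

        // async unicast RPC: not blocked on, hence not timed
        disp.callRemoteMethod(members.get(0), call, async);

        System.out.println(rsp + ", " + all + ", " + some);
        ch.close();
    }
}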

RPC stats can be dumped with probe.sh rpcs-details:
[belasmac] /Users/bela/JGroups$ probe.sh rpcs rpcs-details

-- sending probe on /224.0.75.75:7500
#1 (481 bytes):
local_addr=C [ip=127.0.0.1:55535, version=3.6.8-SNAPSHOT, cluster=uperf, 4 mbr(s)]
uperf: sync  multicast RPCs=0
uperf: async unicast   RPCs=0
uperf: async multicast RPCs=0
uperf: sync  anycast   RPCs=67480
uperf: async anycast   RPCs=0
uperf: sync  unicast   RPCs=189064
rpcs-details=
D: async: 0, sync: 130434, min/max/avg (ms): 0.13/924.88/2.613
A: async: 0, sync: 130243, min/max/avg (ms): 0.11/926.35/2.541
B: async: 0, sync: 63346, min/max/avg (ms): 0.14/73.94/2.221

#2 (547 bytes):
local_addr=A [ip=127.0.0.1:65387, version=3.6.8-SNAPSHOT, cluster=uperf, 4 mbr(s)]
uperf: sync  multicast RPCs=5
uperf: async unicast   RPCs=0
uperf: async multicast RPCs=0
uperf: sync  anycast   RPCs=67528
uperf: async anycast   RPCs=0
uperf: sync  unicast   RPCs=189200
rpcs-details=
<all>: async: 0, sync: 5, min/max/avg (ms): 2.11/9255.10/4917.072
C: async: 0, sync: 130387, min/max/avg (ms): 0.13/929.71/2.467
D: async: 0, sync: 63340, min/max/avg (ms): 0.13/63.74/2.469
B: async: 0, sync: 130529, min/max/avg (ms): 0.13/929.71/2.328

#3 (481 bytes):
local_addr=B [ip=127.0.0.1:51000, version=3.6.8-SNAPSHOT, cluster=uperf, 4 mbr(s)]
uperf: sync  multicast RPCs=0
uperf: async unicast   RPCs=0
uperf: async multicast RPCs=0
uperf: sync  anycast   RPCs=67255
uperf: async anycast   RPCs=0
uperf: sync  unicast   RPCs=189494
rpcs-details=
C: async: 0, sync: 130616, min/max/avg (ms): 0.13/863.93/2.494
A: async: 0, sync: 63210, min/max/avg (ms): 0.14/54.35/2.066
D: async: 0, sync: 130177, min/max/avg (ms): 0.13/863.93/2.569

#4 (482 bytes):
local_addr=D [ip=127.0.0.1:54944, version=3.6.8-SNAPSHOT, cluster=uperf, 4 mbr(s)]
uperf: sync  multicast RPCs=0
uperf: async unicast   RPCs=0
uperf: async multicast RPCs=0
uperf: sync  anycast   RPCs=67293
uperf: async anycast   RPCs=0
uperf: sync  unicast   RPCs=189353
rpcs-details=
C: async: 0, sync: 63172, min/max/avg (ms): 0.13/860.72/2.399
A: async: 0, sync: 130342, min/max/avg (ms): 0.13/862.22/2.338
B: async: 0, sync: 130424, min/max/avg (ms): 0.13/866.39/2.350

This shows the stats for each member of a given cluster: the number of unicast, multicast and anycast RPCs, broken down by target destination, plus the min/max and average invocation times of sync RPCs per target.

Probe just became even more powerful! :-)
Enjoy!

[1] https://issues.jboss.org/browse/JGRP-2005
[2] http://www.jgroups.org/manual/index.html#_looking_at_details_of_rpcs_with_probe

Monday, January 18, 2016

JGroups workshop in Munich April 4-8 2016


I'm happy to announce another JGroups workshop in Munich, April 4-8 2016!

The registration is now open at [2].

The agenda is at [3] and includes an overview of the basic API, building blocks, advanced topics and an in-depth look at the most frequently used protocols, plus some admin stuff (debugging, tracing, diagnosis).

We'll be doing some hands-on demos and looking at code; I always try to make the workshops as hands-on as possible.

I'll be teaching the workshop myself, and I'm looking forward to meeting some of you and having beers in downtown Munich! For attendee feedback on last year's courses, check out [1].

Note that the exact location in Munich has not yet been picked; I'll update the registration page and send an email to already-registered attendees once this is the case (by the end of January at the latest).

The course has a minimum of 5 and a maximum of 15 attendees.

I'm planning to do another course in Boston or New York in the fall of 2016, but those plans have not been finalized yet.

Cheers, and I hope to see many of you in Munich!
Bela Ban


[1] http://www.jgroups.org/workshops.html
[2] http://www.amiando.com/WorkshopMunich
[3] https://github.com/belaban/workshop/blob/master/slides/toc.adoc

Tuesday, January 12, 2016

JGroups 3.6.7.Final released

I'm happy to announce that 3.6.7.Final has been released!

This release contains a few bug fixes, but is mainly about optimizations that reduce memory consumption and allocation rates.
Another optimization went into TCP_NIO2, which is now as fast as TCP. It is slated to become the successor to TCP, as it uses fewer threads and, being built on NIO, should be much more scalable.

3.6.7.Final can be downloaded from SourceForge [1] or used via Maven (groupId=org.jgroups / artifactId=jgroups, version=3.6.7.Final).

Below is a list of the major issues resolved.
Enjoy!


[1] https://sourceforge.net/projects/javagroups/files/JGroups/3.6.7.Final/


New features


Interoperability between TCP and TCP_NIO2


[https://issues.jboss.org/browse/JGRP-1952]
This allows nodes that have TCP as transport to talk to nodes that have TCP_NIO2 as transport, and vice versa.

Optimizations


Transport: reuse of receive buffers

[https://issues.jboss.org/browse/JGRP-1998]
On message reception, the transport would create a new buffer in TCP and TCP_NIO2 (but not in UDP), read the message into that buffer and then pass it to one of the thread pools, copying single messages (but not batches).
This was changed so that the same buffers are now reused in UDP, TCP and TCP_NIO2: the network data is read into one of these buffers, the message (or message batch) is de-serialized and then passed to one of the thread pools.
The effect is a much lower memory allocation rate.
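
As a rough illustration of the pattern (plain UDP and strings, not JGroups' actual transport code): the receive buffer is allocated once and reused for every packet, and only the de-serialized object, not the buffer, is handed off to the thread pool.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ReceiveBufferReuse {
    public static void main(String[] args) throws Exception {
        ExecutorService pool=Executors.newFixedThreadPool(4);
        byte[] buf=new byte[65535];                          // one buffer, reused for every datagram
        try(DatagramSocket sock=new DatagramSocket(7600)) {
            DatagramPacket packet=new DatagramPacket(buf, buf.length);
            for(;;) {
                sock.receive(packet);                        // network data lands in the shared buffer
                // de-serialize into an independent object *before* handing off, so the
                // buffer can immediately be reused for the next packet:
                String msg=new String(packet.getData(), packet.getOffset(), packet.getLength(), StandardCharsets.UTF_8);
                pool.execute(() -> System.out.println("received: " + msg)); // only the copy escapes
            }
        }
    }
}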

Message bundling: reuse of send buffers

[https://issues.jboss.org/browse/JGRP-1989]
When sending messages, a new buffer used to be created for marshalling every message (or message bundle). This was changed to reuse the same buffer for all messages and message bundles.
The effect is a smaller memory allocation rate on the send path.
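
The idea on the send path, sketched with plain JDK classes (the ExposingStream helper below is made up for illustration; JGroups uses its own output stream classes): marshal every message into the same output buffer and hand its internal array to the socket, instead of allocating a fresh buffer per message.

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class SendBufferReuse {
    // exposes the internal array of ByteArrayOutputStream to avoid a copy per send
    static class ExposingStream extends ByteArrayOutputStream {
        ExposingStream(int size) {super(size);}
        byte[] buffer() {return buf;}
    }

    private final ExposingStream out=new ExposingStream(65535); // one buffer, reused for every send
    private final DatagramSocket sock=new DatagramSocket();

    public SendBufferReuse() throws IOException {}

    public synchronized void send(String msg, InetAddress dest, int port) throws IOException {
        out.reset();                                  // reuse the same backing array
        new DataOutputStream(out).writeUTF(msg);      // marshal into the shared buffer
        sock.send(new DatagramPacket(out.buffer(), 0, out.size(), dest, port));
    }
}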

TCP_NIO2: copy on-demand when sending messages

[https://issues.jboss.org/browse/JGRP-1991]
If a message sent by TCP_NIO2 cannot be put entirely into the OS's network buffer, the remainder of that message is copied. This is needed to implement the reuse of send buffers, see JGRP-1989 above.
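
A small sketch of the mechanism in plain NIO terms (the method name writeOrCopyRemainder is made up; this is not the actual NioConnection code): write as much as the socket accepts, and copy only what's left over, so the shared send buffer can be reused right away.

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public class PartialWrite {
    // Tries to write 'buf' (possibly a shared, reused send buffer). If the OS socket buffer
    // cannot take all of it, only the unwritten remainder is copied; the caller queues the
    // returned buffer until the channel becomes writable again.
    static ByteBuffer writeOrCopyRemainder(SocketChannel ch, ByteBuffer buf) throws IOException {
        ch.write(buf);
        if(!buf.hasRemaining())
            return null;                              // everything was written: nothing to keep
        ByteBuffer remainder=ByteBuffer.allocate(buf.remaining());
        remainder.put(buf);                           // copy only the leftover bytes
        remainder.flip();
        return remainder;
    }
}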

TCP_NIO2: single selector slows down writes and reads

[https://issues.jboss.org/browse/JGRP-1999]
This transport used to have a single selector, processing both writes and reads in the same thread. Writes are not expensive, but reads can be, as de-serialization adds up.
We now have a reader thread for every NioConnection which processes reads (using work stealing), separate from the selector thread. When idle for some time, the reader thread terminates, and a new thread is created when data subsequently becomes available to be read.
UPerf (4 nodes) showed a perf increase from 15'000 msgs/sec/node to 24'000. TCP_NIO2's speed is now roughly the same as TCP's.
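
A simplified, hypothetical sketch of the on-demand reader idea (not the actual NioConnection code, and without the work-stealing part): the reader drains a queue filled by the selector thread, terminates after an idle timeout, and is re-created when new data arrives.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class OnDemandReader {
    private final BlockingQueue<byte[]> queue=new LinkedBlockingQueue<>();
    private boolean running; // guarded by 'this'

    // called (e.g. by the selector thread) whenever data has been read off the connection
    public synchronized void dataReady(byte[] chunk) {
        queue.add(chunk);
        if(!running) {                                // no reader running: start a new one
            running=true;
            new Thread(this::drain, "reader").start();
        }
    }

    protected void drain() {
        for(;;) {
            byte[] chunk=null;
            try {
                chunk=queue.poll(5, TimeUnit.SECONDS); // idle timeout
            }
            catch(InterruptedException ignored) {
            }
            synchronized(this) {
                if(chunk == null && queue.isEmpty()) { // idle: let this reader thread terminate
                    running=false;
                    return;
                }
            }
            if(chunk != null)
                process(chunk);
        }
    }

    protected void process(byte[] chunk) {
        System.out.println("processing " + chunk.length + " bytes");
    }
}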

Headers: collapse 2 arrays into 1

[https://issues.jboss.org/browse/JGRP-1990]
A Message used to have a Headers instance holding one array for the header IDs and another for the actual headers. These 2 arrays were collapsed into a single array, and Headers is no longer a separate class; the array is now managed directly inside Message.
This reduces the memory needed per message by ca. 22 bytes!

RpcDispatcher: removal of unneeded field in a request

[https://issues.jboss.org/browse/JGRP-2001]
The request ID was carried in both the Request (UnicastRequest or MulticastRequest) and the header, which was redundant and a waste. It was removed from Request, and rsp_expected was removed from the header as well, for total savings of ca. 9 bytes per RPC.

Switched back from DatagramSocket to MulticastSocket for sending of IP multicasts

[https://issues.jboss.org/browse/JGRP-1970]
Using a DatagramSocket to send IP multicasts caused some issues on MacOS-based systems: when the routing table was not set up correctly, multicasting would not work (nodes wouldn't find each other).
Also, on Windows, IPv6 wouldn't work: https://github.com/belaban/JGroups/wiki/FAQ.

Make the default number of headers in a message configurable

[https://issues.jboss.org/browse/JGRP-1985]
The default used to be 3 (now changed to 4); if a message had more headers than that, the headers array needed to be resized (unneeded memory allocation).

Message bundling

[https://issues.jboss.org/browse/JGRP-1986]
When the threshold of the send queue was exceeded, the bundler thread would send messages one-by-one, leading to bad performance.

TransferQueueBundler: switch to array from linked list for queue

[https://issues.jboss.org/browse/JGRP-1987]
Less memory allocation overhead.

Bug fixes

SASL now handles merges correctly

[https://issues.jboss.org/browse/JGRP-1967]

FRAG2: message corruption when thread pools are disabled

[https://issues.jboss.org/browse/JGRP-1973]

Discovery leaks responses

[https://issues.jboss.org/browse/JGRP-1983]



Manual

The manual is at http://www.jgroups.org/manual/index.html.

The complete list of features and bug fixes can be found at http://jira.jboss.com/jira/browse/JGRP.