RANDAO

by Srikumar Subramanian

Generating random numbers in a deterministic virtual machine like Ethereum is a hard problem. RANDAO is a “distributed autonomous organization” that serves up random numbers provided by its contract users. The gist is that it invites users to pledge an amount and provide the hash of a secret random number, and then, after a commitment period, reveal the secret number. The round’s random number itself is generated by combining all the revealed numbers. See the GitHub repo of RANDAO for details.
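The commit-reveal scheme at the heart of this can be sketched in a few lines. This is an illustrative sketch only, not the actual contract — the real RANDAO also handles deposits, deadlines, and penalties for non-revealers:

```python
# Minimal commit-reveal sketch of the RANDAO idea.
import hashlib
import secrets

def commit(secret: int) -> str:
    """Phase 1: each participant publishes only the hash of a secret number."""
    return hashlib.sha256(secret.to_bytes(32, "big")).hexdigest()

def reveal_ok(secret: int, commitment: str) -> bool:
    """Phase 2: a reveal is valid only if it matches the earlier commitment."""
    return commit(secret) == commitment

def combine(revealed: list[int]) -> int:
    """The round's random number is derived from all revealed secrets."""
    acc = 0
    for s in revealed:
        acc ^= s
    return acc

# One round with three participants.
participant_secrets = [secrets.randbits(256) for _ in range(3)]
commitments = [commit(s) for s in participant_secrets]
assert all(reveal_ok(s, c) for s, c in zip(participant_secrets, commitments))
random_number = combine(participant_secrets)
```

Because each hash is published before any secret is revealed, no participant can choose their number after seeing the others’ — though a last revealer can still withhold, which is why the real contract takes deposits.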

A survey of text clustering algorithms

by Srikumar Subramanian

A survey of text clustering algorithms by Aggarwal and Zhai (link to pdf).

Covers TF/IDF, Latent Semantic Indexing, non-negative matrix factorization, distance based and hierarchical clustering, hybrid scatter-gather method, word and phrase based clustering, co-clustering words and documents, graph clustering, information theoretic approaches, LDA (latent Dirichlet allocation) based topic modeling, online (streaming) clustering and semi-supervised clustering.
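To make the first item on that list concrete, here is a hand-rolled TF/IDF sketch using only the standard library (the documents are made-up examples; real systems would use a library such as scikit-learn):

```python
# TF/IDF: weight a term by how often it occurs in a document (TF),
# discounted by how many documents it appears in (IDF).
import math
from collections import Counter

docs = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "neural networks learn representations",
]

def tf_idf(docs):
    tokenized = [d.split() for d in docs]
    n = len(tokenized)
    # Document frequency: in how many documents does each word occur?
    df = Counter(w for toks in tokenized for w in set(toks))
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vectors.append({w: (c / len(toks)) * math.log(n / df[w])
                        for w, c in tf.items()})
    return vectors

vectors = tf_idf(docs)
```

Terms shared across documents (“the”, “sat”, “on”) get low weight, while terms that discriminate a document (“cat”, “neural”) get high weight — the resulting vectors are what distance-based clustering methods then operate on.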

Universal Dependencies

by Srikumar Subramanian

Universal Dependencies is a project that is developing cross-linguistically consistent treebank annotation for many languages, with the goal of facilitating multilingual parser development, cross-lingual learning, and parsing research from a language typology perspective.
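UD treebanks are distributed in the CoNLL-U format, where each token is a line of ten tab-separated fields. A minimal parser, with a made-up example sentence fragment for illustration:

```python
# The ten token-level fields of the CoNLL-U format used by
# Universal Dependencies treebanks.
CONLLU_FIELDS = ["id", "form", "lemma", "upos", "xpos",
                 "feats", "head", "deprel", "deps", "misc"]

def parse_token_line(line: str) -> dict:
    """Parse one CoNLL-U token line into a field->value dict."""
    return dict(zip(CONLLU_FIELDS, line.rstrip("\n").split("\t")))

# Hypothetical token line: "Dogs", a plural noun, subject of token 2.
line = "1\tDogs\tdog\tNOUN\tNNS\tNumber=Plur\t2\tnsubj\t_\t_"
token = parse_token_line(line)
```

The `upos` and `deprel` fields hold the cross-linguistically consistent part-of-speech tags and dependency relations that are the project’s main contribution.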

Avoiding nausea in Virtual Reality

by Srikumar Subramanian

VR tech’s Achilles’ heel has been its tendency to leave users giddy or nauseated at the end of an experience. If you want folks to have a pleasant experience, you need to stick to the following design principle -

NEVER perform any camera movement in the VR space that isn’t also a movement done by the user.

In practice, this means a) you can do head tracking turns, b) you can do manipulative actions on the scene objects, c) you cannot set the camera in linear or other motion, especially accelerating motion, even if it is in response to a controller event. Make sure what your vestibular system knows and what your eyes see always agree with each other.

This is a severe constraint, but if you absolutely want your experience to reach the widest audience, there is no way around it that we know of so far … unless you have gravity-warping technology.
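The principle above can be sketched as a camera update loop that applies only user-driven motion. The `HeadPose` and `Camera` types here are hypothetical stand-ins for whatever your engine provides:

```python
from dataclasses import dataclass

@dataclass
class HeadPose:
    yaw: float
    pitch: float
    roll: float

@dataclass
class Camera:
    yaw: float = 0.0
    pitch: float = 0.0
    roll: float = 0.0
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0

def update_camera(camera: Camera, head: HeadPose) -> None:
    # OK: rotation that mirrors the user's own head movement (rule a).
    camera.yaw, camera.pitch, camera.roll = head.yaw, head.pitch, head.roll
    # NOT OK: translating or accelerating the camera here -- even in
    # response to a controller event -- would break the agreement
    # between the vestibular system and the eyes (rule c).

cam = Camera()
update_camera(cam, HeadPose(yaw=0.5, pitch=0.1, roll=0.0))
```

Manipulating scene objects (rule b) is fine because it moves the world’s contents, not the user’s viewpoint.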

Algorand: Scaling Byzantine Agreements for Cryptocurrencies

by Srikumar Subramanian

Yossi Gilad, Rotem Hemo, Silvio Micali, Georgios Vlachos, Nickolai Zeldovich
MIT CSAIL
Original link to paper

Abstract

Algorand is a new cryptocurrency that confirms transactions with latency on the order of a minute while scaling to many users. Algorand ensures that users never have divergent views of confirmed transactions, even if some of the users are malicious and the network is temporarily partitioned. In contrast, existing cryptocurrencies allow for temporary forks and therefore require a long time, on the order of an hour, to confirm transactions with high confidence.

Algorand uses a new Byzantine Agreement (BA) protocol to reach consensus among users on the next set of transactions. To scale the consensus to many users, Algorand uses a novel mechanism based on Verifiable Random Functions that allows users to privately check whether they are selected to participate in the BA to agree on the next set of transactions, and to include a proof of their selection in their network messages. In Algorand’s BA protocol, users do not keep any private state except for their private keys, which allows Algorand to replace participants immediately after they send a message. This mitigates targeted attacks on chosen participants after their identity is revealed.

We implement Algorand and evaluate its performance on 1,000 EC2 virtual machines, simulating up to 500,000 users. Experimental results show that Algorand confirms transactions in under a minute, achieves 125× Bitcoin’s throughput, and incurs almost no penalty for scaling to more users.
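The private-selection idea can be illustrated with a toy: each user hashes their key with the round seed and is selected if the result falls below a threshold. Note this uses a bare hash only for illustration; Algorand uses a real Verifiable Random Function, which additionally yields a proof that others can verify:

```python
import hashlib

def selected(secret_key: bytes, seed: bytes, threshold: float) -> bool:
    """Privately check selection: hash the key with the round seed and
    map it to a value uniform in [0, 1)."""
    h = hashlib.sha256(secret_key + seed).digest()
    value = int.from_bytes(h, "big") / 2**256
    return value < threshold

seed = b"round-42"  # made-up round seed for the sketch
committee = [k for k in range(1000)
             if selected(k.to_bytes(4, "big"), seed, threshold=0.01)]
# With threshold 0.01, roughly 10 of the 1000 users are expected to be chosen.
```

Because each user evaluates the check locally, an attacker cannot know who is on the committee until members announce themselves — and by then, in Algorand’s protocol, they have already sent their one message.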

Energy consumption

by Srikumar Subramanian

Han, Mao and Dally talk about energy consumption in their paper titled Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding:

“Energy consumption is dominated by memory access. Under 45nm CMOS technology, a 32 bit floating point add consumes 0.9pJ, a 32bit SRAM cache access takes 5pJ, while a 32bit DRAM memory access takes 640pJ, which is 3 orders of magnitude of an add operation. Large networks do not fit in on-chip storage and hence require the more costly DRAM accesses. Running a 1 billion connection neural network, for example, at 20fps would require (20Hz)(1G)(640pJ) = 12.8W just for DRAM access - well beyond the power envelope of a typical mobile device.”
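The back-of-envelope arithmetic in that quote checks out:

```python
# (20 Hz) * (1G connections) * (640 pJ per DRAM access)
frame_rate = 20                 # Hz (frames per second)
connections = 1e9               # 1 billion connections, one access each
dram_access_energy = 640e-12    # 640 pJ per 32-bit DRAM access, in joules

power_watts = frame_rate * connections * dram_access_energy  # 12.8 W
```

And 640 pJ / 0.9 pJ ≈ 711, i.e. roughly three orders of magnitude more energy per DRAM access than per floating-point add — which is why keeping the compressed network in on-chip SRAM is the whole point of the paper.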