Blockchain development from scratch

by Vishwas Bhushan

We are looking into Stellar and, in parallel, keeping an eye on how to develop our own blockchain, either by forking an existing one or by developing from scratch. While reading the Stellar white paper, I keep coming across a lot of keywords related to our research work. One of them is Tendermint.

Tendermint Core is BFT middleware that takes a state transition machine - written in any programming language - and securely replicates it on many machines, which is in effect a blockchain. It is a low-level engine based on a BFT protocol, and it is being used as a development kit for building blockchains. Other docs related to Tendermint are here. The paper related to BFT is here.

Some notes on Tendermint:

Tendermint consists of two chief technical components: a blockchain consensus engine and a generic application interface. The consensus engine, called Tendermint Core, ensures that the same transactions are recorded on every machine in the same order. The application interface, called the Application BlockChain Interface (ABCI), enables the transactions to be processed in any programming language.
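To get a feel for the split between the consensus engine and the application, here is a rough Python sketch of an ABCI-style state machine (a toy key-value store; the method names and semantics are simplified for illustration and are not the actual ABCI specification):

```python
import hashlib
import json

class KVStoreApp:
    """A toy state machine, loosely modeled on the ABCI idea: the consensus
    engine feeds it transactions in a fixed order, and every replica running
    this code ends up with the same state."""

    def __init__(self):
        self.state = {}    # committed key/value pairs
        self.pending = []  # transactions in the current block

    def check_tx(self, tx: bytes) -> bool:
        # Mempool-level validation: here we only require "key=value" form.
        return b"=" in tx

    def deliver_tx(self, tx: bytes) -> bool:
        # Called once per transaction, in block order, on every replica.
        if not self.check_tx(tx):
            return False
        key, value = tx.split(b"=", 1)
        self.pending.append((key.decode(), value.decode()))
        return True

    def commit(self) -> str:
        # Apply the block and return a state hash; replicas can compare these
        # hashes to confirm they replayed the same transactions.
        for key, value in self.pending:
            self.state[key] = value
        self.pending.clear()
        digest = hashlib.sha256(json.dumps(self.state, sort_keys=True).encode())
        return digest.hexdigest()


# Two replicas given the same ordered transactions reach the same state hash.
txs = [b"owner=alice", b"balance=42"]
a, b = KVStoreApp(), KVStoreApp()
for tx in txs:
    a.deliver_tx(tx)
    b.deliver_tx(tx)
assert a.commit() == b.commit()
```

The point of the split is exactly this: the ordering of transactions is the consensus engine's problem, while the application only has to apply them deterministically.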

What blockchains are built on Tendermint so far?

A lot of them. For example: Hyperledger Fabric, Hyperledger Burrow, Cosmos, and Ethermint, among others.

Article: Mobile designers will shape the future of 3D application design

by Srikumar Subramanian

“The first great mobile AR experiences will be built by people who already know 2D design tools and workflows.”

https://www.invisionapp.com/blog/ux-designers-mobile-ar/

There are five major challenges that make up the Wall of Pain.

  1. Learning new tools and developing workarounds.
  2. Cumbersome communication between designers, developers, and other stakeholders.
  3. Obtaining and creating 3D assets.
  4. Rapid prototyping that’s anything but rapid.
  5. The challenges of sharing work.

RANDAO

by Srikumar Subramanian

Generating random numbers in a deterministic virtual machine like Ethereum is a hard problem. RANDAO is a “distributed autonomous organization” that serves up random numbers provided by its contract users. The gist is that it invites users to pledge an amount and submit a hashed secret random number, and then, after a period, reveal the secret number. The random number itself is generated from the collection of revealed numbers during a round. See the RANDAO GitHub repo for details.
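A minimal Python sketch of the commit-reveal mechanism (the real RANDAO is an Ethereum contract; the class and method names below are just illustrative, and deposits/penalties are omitted):

```python
import hashlib
import secrets

class CommitRevealRound:
    """Simplified commit-reveal round in the spirit of RANDAO."""

    def __init__(self):
        self.commitments = {}  # participant -> sha256(secret)
        self.reveals = {}      # participant -> secret

    def commit(self, participant: str, secret: bytes) -> None:
        # Commit phase: only the hash of the secret is published.
        self.commitments[participant] = hashlib.sha256(secret).hexdigest()

    def reveal(self, participant: str, secret: bytes) -> None:
        # Reveal phase: the secret must match the earlier commitment.
        if hashlib.sha256(secret).hexdigest() != self.commitments.get(participant):
            raise ValueError("reveal does not match commitment")
        self.reveals[participant] = secret

    def random_number(self) -> int:
        # Combine all revealed secrets; no single participant controls the
        # result as long as at least one of them is honest and all reveal.
        result = 0
        for secret in self.reveals.values():
            result ^= int.from_bytes(hashlib.sha256(secret).digest(), "big")
        return result


round_ = CommitRevealRound()
participants = {name: secrets.token_bytes(32) for name in ("alice", "bob", "carol")}
for name, s in participants.items():
    round_.commit(name, s)
for name, s in participants.items():
    round_.reveal(name, s)
print(hex(round_.random_number()))
```

The pledge and penalty rules exist to handle the obvious attack on this scheme: a participant who dislikes the outcome simply refusing to reveal.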

A survey of text clustering algorithms

by Srikumar Subramanian

A survey of text clustering algorithms by Aggarwal and Zhai (link to pdf).

Covers TF/IDF, Latent Semantic Indexing, non-negative matrix factorization, distance-based and hierarchical clustering, the hybrid scatter-gather method, word- and phrase-based clustering, co-clustering of words and documents, graph clustering, information-theoretic approaches, LDA (latent Dirichlet allocation) based topic modeling, online (streaming) clustering, and semi-supervised clustering.
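As a concrete starting point, a TF/IDF representation plus k-means is about the simplest distance-based pipeline from that list. A minimal sketch, assuming scikit-learn is available:

```python
# TF-IDF vectors + k-means clustering on a tiny toy corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "bitcoin ethereum blockchain consensus",
    "byzantine agreement consensus protocol",
    "neural network pruning quantization",
    "deep learning model compression",
]

# Represent each document as a TF/IDF-weighted term vector.
vectors = TfidfVectorizer().fit_transform(docs)

# Cluster the vectors; k is chosen by hand here.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
for doc, label in zip(docs, labels):
    print(label, doc)
```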

Universal Dependencies

by Srikumar Subramanian

Universal Dependencies is a project that is developing cross-linguistically consistent treebank annotation for many languages, with the goal of facilitating multilingual parser development, cross-lingual learning, and parsing research from a language typology perspective.

Avoiding nausea in Virtual Reality

by Srikumar Subramanian

VR tech’s Achilles’ heel has been the tendency to leave users giddy or nauseous at the end of an experience. If you want folks to have a pleasant experience, you need to stick to the following design principle -

NEVER perform any camera movement in the VR space that isn’t also a movement done by the user.

In practice, this means a) you can do head-tracked turns, b) you can do manipulative actions on scene objects, and c) you cannot set the camera in linear or other motion, especially accelerating motion, even if it is in response to a controller event. Make sure that what your vestibular system knows and what your eyes see always agree with each other.

This is a severe constraint, but if you absolutely need the maximum reach, there is no way around it that we know of so far … unless you have gravity warping technology.
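A loose sketch of the rule, using entirely hypothetical data structures rather than any particular VR SDK: the camera pose is driven only by the tracked head pose plus a fixed play-area origin, and long-distance movement is a discrete teleport of that origin rather than a scripted glide.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    position: tuple  # (x, y, z) in meters
    rotation: tuple  # orientation quaternion (w, x, y, z)

def update_camera(head_pose: Pose, play_area_origin: tuple) -> Pose:
    # Rotation and small positional offsets come straight from head tracking,
    # so the eyes and the vestibular system always agree.
    x, y, z = (a + b for a, b in zip(head_pose.position, play_area_origin))
    return Pose(position=(x, y, z), rotation=head_pose.rotation)

def teleport(target: tuple) -> tuple:
    # Long-distance movement: the play-area origin jumps straight to the
    # target instead of animating the camera through intermediate positions.
    return target
```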

Algorand: Scaling Byzantine Agreements for Cryptocurrencies

by Srikumar Subramanian

Yossi Gilad, Rotem Hemo, Silvio Micali, Georgios Vlachos, Nickolai Zeldovich
MIT CSAIL
Original link to paper

Abstract

Algorand is a new cryptocurrency that confirms transactions with latency on the order of a minute while scaling to many users. Algorand ensures that users never have divergent views of confirmed transactions, even if some of the users are malicious and the network is temporarily partitioned. In contrast, existing cryptocurrencies allow for temporary forks and therefore require a long time, on the order of an hour, to confirm transactions with high confidence.

Algorand uses a new Byzantine Agreement (BA) protocol to reach consensus among users on the next set of transactions. To scale the consensus to many users, Algorand uses a novel mechanism based on Verifiable Random Functions that allows users to privately check whether they are selected to participate in the BA to agree on the next set of transactions, and to include a proof of their selection in their network messages. In Algorand’s BA protocol, users do not keep any private state except for their private keys, which allows Algorand to replace participants immediately after they send a message. This mitigates targeted attacks on chosen participants after their identity is revealed.

We implement Algorand and evaluate its performance on 1,000 EC2 virtual machines, simulating up to 500,000 users. Experimental results show that Algorand confirms transactions in under a minute, achieves 125× Bitcoin’s throughput, and incurs almost no penalty for scaling to more users.
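The private self-selection idea from the abstract can be sketched in a few lines. Note that the real protocol uses a Verifiable Random Function, so that selection comes with a publicly checkable proof; the plain hash below is only a stand-in for intuition, and the function and parameter names are invented for this sketch.

```python
import hashlib
from typing import Tuple

def is_selected(secret_key: bytes, seed: bytes, stake: float, total_stake: float,
                committee_fraction: float) -> Tuple[bool, bytes]:
    """Toy version of private sortition: each user evaluates a function of
    their own key and a public seed, and is selected if the output falls
    below a stake-weighted threshold. A real VRF also yields a proof that
    others can verify; a plain hash (used here) does not."""
    digest = hashlib.sha256(secret_key + seed).digest()
    value = int.from_bytes(digest, "big") / float(2 ** 256)  # uniform in [0, 1)
    threshold = committee_fraction * (stake / total_stake)
    return value < threshold, digest


# Each user checks selection locally, without talking to anyone else.
seed = b"round-42-seed"
selected, proof = is_selected(b"user-secret-key", seed,
                              stake=100.0, total_stake=10_000.0,
                              committee_fraction=0.2)
print(selected)
```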

Energy consumption

by Srikumar Subramanian

Han, Mao, and Dally talk about energy consumption in their paper titled Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding.

“Energy consumption is dominated by memory access. Under 45nm CMOS technology, a 32 bit floating point add consumes 0.9pJ, a 32bit SRAM cache access takes 5pJ, while a 32bit DRAM memory access takes 640pJ, which is 3 orders of magnitude of an add operation. Large networks do not fit in on-chip storage and hence require the more costly DRAM accesses. Running a 1 billion connection neural network, for example, at 20fps would require (20Hz)(1G)(640pJ) = 12.8W just for DRAM access - well beyond the power envelope of a typical mobile device.”
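The arithmetic in that quote is easy to reproduce; the per-access energies below are the 45nm figures cited above.

```python
# Back-of-the-envelope DRAM power figure from the Deep Compression quote.
frame_rate = 20                # Hz (inference passes per second)
connections = 1e9              # weights fetched per pass
dram_access_energy = 640e-12   # joules per 32-bit DRAM access

power_watts = frame_rate * connections * dram_access_energy
print(power_watts)  # 12.8 W, just for DRAM accesses

# For comparison, the same traffic served from on-chip SRAM at 5 pJ/access:
print(frame_rate * connections * 5e-12)  # 0.1 W
```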