January 2017

Odd Passenger: Solving User Authentication via Artificial Intelligence, Pt. II

odd_passenger

-----BEGIN HASHED POST BLOCK-----

Part II: Networked Smart Contracts

Since the beginning of my winter break from Columbia SIPA, I’ve had the opportunity to do some fairly extensive dev work on personal smart contract projects. Among my aims was exploring the possibility of networked smart contracts (“NSC”). By NSC, I am not referring simply to contracts that make public function calls to their peers on the same chain, but rather to contracts that make public function calls to contracts on other chains.

Currently, functions in Solidity contracts can be declared with external visibility, so that other contracts may call them. Should a contract need a function that is defined elsewhere, it can invoke it through an EVM message call to the contract where that function lives, with the result returned to the contract currently being executed. This has obvious advantages, as some contracts may be complex and require segmentation for organization or, more practically, to ensure that gas is spent in an efficient and easily-auditable manner.

An oversimplified call can be seen in the following minimal sketch (written in Solidity 0.4-era syntax; the Calculator contract, its functions, and its deployed address are hypothetical names used purely for illustration):
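```
pragma solidity ^0.4.0;

// A contract exposing a publicly callable function.
contract Calculator {
    function add(uint a, uint b) constant returns (uint) {
        return a + b;
    }
}

// A second contract that pulls in Calculator's function via an
// external EVM message call rather than redefining it locally.
contract Caller {
    Calculator calc;

    // Point this contract at an already-deployed Calculator instance.
    function Caller(address calcAddress) {
        calc = Calculator(calcAddress);
    }

    function addViaCalculator(uint a, uint b) constant returns (uint) {
        // calc.add(...) compiles to a CALL against the Calculator
        // contract's address on the same chain.
        return calc.add(a, b);
    }
}
```

Note that Caller never redefines add; the EVM resolves calc.add as a message call to whatever contract lives at calcAddress. That resolution is also precisely the mechanism that stops at the boundary of a single chain.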

However, the downside of this system is that contracts can only reference others operating on the same blockchain, such as Ethereum. For now, given the relative infancy of the technology, this isn’t a problem: smart contracts remain comparatively simple relative to their physical counterparts and don’t require massive computations or complex logic. This will eventually change as time progresses and blockchains are incorporated into, and built by, more industries.

This functionality should be implemented because of that future adoption, as well as the high likelihood that proprietary chains will be built to handle different processes and solve different issues. While a number of companies and projects are attempting to lead the future of blockchain development and design what will become the eventual standards of blockchain, no one set has been adopted en masse (even though there is certainly heavy sway towards technologies such as Ethereum). Over the past year, I’ve consulted on a number of different blockchain projects and have identified a common theme in my work: while everyone wants their own “chain” to satisfy the same requirements as their peers, several clients (and especially larger institutions) need some intercompatibility with their partners. If everyone is using a different blockchain, we’ve accomplished little more than overcomplicating extant, proven systems.

Eventually, I expect the number of available and competing blockchain technologies to shrink. Hobby projects will be abandoned, some of the more innovative chains produced by smaller firms and startups will be acquired, and several institutional collaborative efforts (such as Hyperledger) will remain as the enterprise-class options. Still, I envision that multiple chains will continue to exist, especially in industries where larger organizations have a good argument for implementing their own proprietary blockchains (such as the financial industry).

Going back to my original point, there is no current way for these various blockchains to talk to one another. One might argue that the security provided by decentralization obviates the need for multiple blockchain implementations, and that all data that might need to be transacted on a blockchain could be transacted on a single chain. There are, however, obvious disincentives to that consolidation. In a highly-regulated industry such as banking, financial institutions would need their own ledgers to ensure adherence to incredibly complex auditing requirements. Organizations that require exceptional security, such as governments, might need to run their data in-house, with each agency maintaining its own chain. Even a separation as simple as that between corporate sales and manufacturing departments might be a decent argument for maintaining separate blockchains. If chains are implemented independently, we need to design protocols and standards for NSC that facilitate secure and reliable communication and connectivity between them.

It’s important that the blockchain community writ large make two key policy decisions. First, the community needs to decide on a clear and defined set of standards to abide by in the development of various blockchains. This is difficult given that interested companies, project collaborations, and open source efforts are constantly expanding the limits and possibilities of the technology; everyone is moving outward in various directions to be first-to-market with their respective inventions and improvements. However, those moving in substantially similar or even parallel directions need to agree on a set of structural standards to ensure that if both chains get adopted, there are commonalities between them. An example of such a process is the Bitcoin Improvement Proposal (“BIP”), in which proposed standards for Bitcoin follow a lengthy path of submission, evaluation, and implementation.

Second, within these standards, the blockchain community needs to implement robust protocols for chains to communicate with one another. Chains will eventually replace many legacy systems, but will likely do so independently rather than cumulatively, and they will need to be capable of “smart” conversations with their counterparts. There will eventually be a need for movement by and between chains, whether in the generation of new contracts on other chains or in the simultaneous execution of contract terms on chains implemented parallel to one another.
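To make the idea concrete, here is one purely illustrative sketch of what an NSC primitive could look like on the Ethereum side. Everything in it is an assumption of mine rather than an existing standard: the RemoteCaller contract, the event names, and above all the trusted off-chain relay, which is needed because the EVM itself cannot reach another chain.

```
pragma solidity ^0.4.0;

// Hypothetical NSC sketch: an off-chain relay watches for
// OutboundCall events, performs the requested call on the remote
// chain, and posts the result back to this contract.
contract RemoteCaller {
    address relay;   // off-chain relay trusted to deliver results
    uint nextId;     // identifier linking requests to their results

    event OutboundCall(uint id, bytes32 targetChain,
                       address targetContract, bytes payload);
    event ResultDelivered(uint id, bytes result);

    function RemoteCaller(address _relay) {
        relay = _relay;
    }

    // Request that a function be called on a contract on another chain.
    function callRemote(bytes32 targetChain, address target,
                        bytes payload) returns (uint id) {
        id = nextId++;
        OutboundCall(id, targetChain, target, payload);
    }

    // Only the relay may deliver a remote result back to this chain.
    function deliverResult(uint id, bytes result) {
        if (msg.sender != relay) throw;
        ResultDelivered(id, result);
        // ...hand the result to whatever logic made the request
    }
}
```

The trusted relay is, of course, the weak point of the sketch, which is exactly why community-agreed standards for representing and verifying state across chains would need to come first.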

If blockchain truly is the future (which I firmly contend it is), the community should also look ahead to future challenges for the technology. Regardless of how many chains exist within the ecosystem, they will eventually need the ability to interact with each other to guarantee advanced functionality. Designing blockchain products that are completely isolated from one another may seem like good business practice today, and may help businesses capture significant market share, but it will hinder the technology’s overall potential going forward.

If you have feedback on this idea, I’d love to hear it! I can be reached via cvdw@werken.com.

-----END HASHED POST BLOCK-----

Featured Image Credit: Gerd Altmann, “untitled”. Public Domain Work. Modified, Desaturated.

SHA256(e2df40019a65ead380eaf365016d0f7a-b9b179454f3348a9adfc187c57a9c6cff15e6436.zip)= 09055f5b819086ada38c6e20654df5c7eb28944d18491a7c966822adefea14e8

Odd Passenger: Solving User Authentication via Artificial Intelligence, Pt. I

odd_passenger

-----BEGIN HASHED POST BLOCK-----

Part I: An Argument For Enhanced Data Logging in Machine Learning

One of the discussions I’ve recently found myself regularly engaged in concerns the dichotomy of privacy and machine learning (“ML”). These two concepts have been related for several decades; however, they haven’t occupied the same discussion until recently, especially with the emergence of mass-market IoT, advanced predictive analytical models, and innovations in consumer tracking. Until the last few years, it was possible to keep privacy separate from the vision of “smart life” automation because people’s individual lives weren’t mined to make models more sophisticated and efficient. Previously, voice recorded by smartphones wasn’t used for pattern analysis [1]. Text messages, emails, and search history weren’t automatically skimmed to create composite pictures [2, 3] of pertinent routines and daily events. Privacy policies originally required information submissions to be opt-in rather than opt-out. This trend towards reduced privacy has been mirrored by an inverse trend: growth in the reach and capacity of machine learning.

Before I begin this multi-part post, I want to note that I’m not an advocate for a reduction in privacy; quite the opposite, actually. I believe in maintaining privacy on the internet and in the devices that I use to access it. I believe in strong end-to-end crypto and the security of communications technologies. I further believe in the opportunity to remain abstracted from the data that I generate.

That being said, I’m also a strong proponent of improvements to, and innovations in, machine learning, especially as it pertains to the possibility of truly independent and functional artificial intelligence (“AI”). I believe that ML will eventually lead to robust AI, and that the collection of some data inherently advances this goal.

Unfortunately, these two concepts are paradoxically intertwined. It is not possible to have both explosive growth in machine learning and complete privacy. Devices need to thoroughly understand and interact with user data in order to expand the abilities of ML, and doing so requires a reduction in the overall privacy protections we enjoy today, because these devices will need to learn about us to become better at working for us. Regardless of whether it is an organization or a single device analyzing the data, to some extent privacy is being scaled back in order to enable some type of function or benefit. We haven’t yet reached a balancing point between the two that affords adequate protection while enabling and incentivizing innovation. Organizations, industries, and governments are interspersed across the privacy spectrum, and many today find themselves positioned at the extremes. However, the only viable option for achieving the future of both is to remain somewhere near the center today.

It is important for the evolution of ML that we step back from absolute privacy and consider the possibility of living in a majority-private society, one in which devices and systems are allowed to learn about us in an independent and general sense, but not to report back specifics that can be used to individually target us. This is not an argument for backdoors, increased surveillance, or the erosion of privacy protections, but rather for a posture that increases innovation with relatively fledgling technologies such as AI. In order for artificial intelligence to move into the next phase of its evolution, the consumer population needs to become more comfortable with sharing some information with systems designed to analyze it. This has largely already occurred, given the massive adoption of services like Google Maps (which uses satellites and cell towers to geolocate devices) and Amazon Alexa (which listens for speech within the vicinity of the device and relays it to services such as Lex for processing [4]), among countless others. When I think of the long-term possibilities achievable by AI, I firmly believe they outweigh a brief and relatively superficial fluctuation in privacy frameworks.

This brings me to the main focus of this series: incentivizing the evolution of artificial intelligence to solve the still-flawed problem of digital authentication.

We currently live at a time when identity and user authentication are inherently flawed. For someone to access a system or device, they must prove their identity or otherwise undergo some process to authenticate themselves. The most common mechanism is the password, although notable improvements have appeared over the past several years, including keystroke biometrics [5, 6] and wearables [7], among others. Unfortunately, all of these solutions still place the burden of identification onto the most insecure part of the process: the user. Passwords can be reused or insecurely created, wearables can be used without the user’s consent, and some types of biometrics can be forged. For each of these solutions, it is always up to the user to maintain the integrity, security, and possession of their authentication media. For most users, this is simply not practical, as the sheer number of instances requiring different forms of access is too large to manage.

I contend that authentication of the future should not rely on users at all, but rather should be based on artificial intelligence. Rather than continually trying to improve inferior authentication mechanisms, there should be an effort towards developing completely secure systems that are aware of their users and know when and where to authenticate them. These systems would know enough about their users to grant them access to the same types of systems where passwords generally suffice today. This is a realistic possibility over the next several decades if we, as a society, shift along the privacy spectrum in a way that allows for the evolution of artificial intelligence.

Over the next few posts, I want to explore the possibility of creating AI authentication mechanisms, a project that I’ve codenamed Odd Passenger and will from time to time refer to as Authentication by AI (“AAI”). I plan to identify and evaluate the technologies that I find pertinent to the concept, and to present an idea that will hopefully grow into a viable future solution.

If you have feedback on this idea, I’d love to hear it! I can be reached via cvdw@werken.com.

-----END HASHED POST BLOCK-----

Featured Image Credit: Gerd Altmann, “untitled”. Public Domain Work. Modified, Desaturated.

SHA256(cb05ac6f578f97f14113553cfe5e990a-74e844ede81bdb75dbac0596c14a3db7fc1d1ba4.zip)= 06ae1a7b60bffa18f5d4f0e716e811ff91e26a827fdb924367ad95fa3edab15a