Odd Passenger: Solving User Authentication via Artificial Intelligence, Pt. I

odd_passenger

-----BEGIN HASHED POST BLOCK-----

Part I: An Argument For Enhanced Data Logging in Machine Learning

One of the discussions I’ve recently found myself regularly engaged in concerns the dichotomy between privacy and machine learning (“ML”). The two concepts have been related for several decades, but they haven’t occupied the same conversation until recently, especially with the emergence of mass-market IoT, advanced predictive analytical models, and innovations in consumer tracking. Until the last few years, it was possible to keep privacy separate from any vision of “smart life” automation, because people’s individual lives weren’t being mined to make models more sophisticated and efficient. Recorded voice from smartphones wasn’t used for pattern analysis [1]. Text messages, emails, and search history weren’t automatically skimmed to create composite pictures [2, 3] of routines and daily events. Privacy policies originally required information sharing to be opt-in rather than opt-out. This trend toward reduced privacy has been matched by a corresponding growth in the capacity of machine learning.

Before I begin this multi-part post, I want to note that I’m not an advocate for a reduction in privacy; quite the opposite, actually. I believe in maintaining privacy on the internet and in the devices I use to access it. I believe in strong end-to-end crypto and in the security of communications technologies. I further believe in having the opportunity to remain abstracted from the data that I generate.

That being said, I’m also a strong proponent of improvements to, and innovations in, machine learning, especially as it pertains to the possibility of truly independent and functional artificial intelligence (“AI”). I believe that ML will eventually lead to robust AI, and that the collection of some data inherently advances this goal.

Unfortunately, these two concepts are paradoxically intertwined. It is not possible to have both explosive growth in machine learning and complete privacy. Devices need to thoroughly understand and interact with user data in order to expand the abilities of ML. However, doing so requires a reduction in the overall state of privacy protections that we enjoy today, because these devices will need to learn about us to become better at working for us. Regardless of whether it is an organization or a single device analyzing the data, privacy is being scaled back to some extent in order to enable some type of function or benefit. We haven’t yet reached a balancing point between the two that affords adequate protection while enabling and incentivizing innovation. Organizations, industries, and governments are interspersed across the privacy spectrum, and many today find themselves positioned at the extremes. However, the only viable way to secure the future of both is to remain somewhere near the center today.

It is important for the evolution of ML that we step back from absolute privacy and consider the possibility of living in a majority-private society. In such a society, devices and systems would be allowed to learn about us in an independent and general sense, but not report back specifics that could be used to target us individually. This is not an argument for backdoors, increased surveillance, or the erosion of privacy protections, but rather an argument for a posture that accelerates innovation in relatively fledgling technologies such as AI. For artificial intelligence to move into the next phase of its evolution, the consumer population needs to become more comfortable with sharing some information with systems designed to analyze it. This has largely already occurred, given the massive adoption of services like Google Maps (which uses GPS satellites and cell towers to geolocate devices) and Amazon Alexa (which listens for speech in the vicinity of the device and relays it to backend services such as Lex for processing [4]), among countless others. When I think of the long-term possibilities achievable by AI, I firmly believe they outweigh a brief and relatively superficial fluctuation in privacy frameworks.

This brings me to the main focus of this series: incentivizing the evolution of artificial intelligence to address the still-unsolved problem of digital authentication.

We currently live at a time when identity and user authentication are inherently flawed. For someone to access a system or device, they must prove their identity or otherwise undergo some process to authenticate themselves. The most common mechanism is the password, although there have been notable improvements over the past several years, including keystroke biometrics [5, 6] and wearables [7], among others. Unfortunately, all of these solutions still place the burden of identification onto the most insecure part of the process: the user. Passwords can be reused or insecurely created, wearables can be used without the user’s consent, and some types of biometrics can be forged. For each of these solutions, it is always up to the user to maintain the integrity, security, and possession of their authentication mediums. For most users, this is simply not practical, as the sheer number of accounts and devices requiring different forms of access is too large to manage.
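
To make the keystroke-biometrics point concrete, here is a minimal sketch, in Python, of how a timing profile might be enrolled and later checked. It is purely illustrative: the features, tolerance, and matching rule are my own assumptions and are not taken from the cited systems [5, 6].

# Illustrative sketch only: comparing keystroke timing features against a
# stored user profile. The features, matching rule, and tolerance are
# hypothetical placeholders, not a production biometric scheme.
from statistics import mean

def timing_features(intervals):
    """Reduce a list of inter-key intervals (in seconds) to simple features."""
    return {
        "mean_interval": mean(intervals),
        "min_interval": min(intervals),
        "max_interval": max(intervals),
    }

def matches_profile(sample_intervals, profile, tolerance=0.05):
    """Accept the sample if every feature is within `tolerance` seconds of
    the enrolled profile."""
    sample = timing_features(sample_intervals)
    return all(abs(sample[k] - profile[k]) <= tolerance for k in profile)

# Enrollment: the user types a known phrase and their rhythm is recorded.
enrolled = timing_features([0.18, 0.22, 0.19, 0.25, 0.21])

# Later attempt: anyone who can reproduce the rhythm passes the check.
print(matches_profile([0.19, 0.23, 0.20, 0.24, 0.22], enrolled))  # True

Even in this toy form, the weakness is clear: the check accepts whoever reproduces the timing pattern, so the burden of keeping that pattern unobserved and unforged still falls on the user.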

I contend that authentication of the future should not rely on users at all, but rather should be based on artificial intelligence. Rather than continually trying to improve inferior authentication mechanisms, there should be an effort to develop completely secure systems that are aware of their users and know when and where to authenticate them. These systems would know enough about their users to grant them access to the same kinds of systems where passwords generally suffice today. This is a realistic possibility over the next several decades if we, as a society, shift along the privacy spectrum enough to allow for the evolution of artificial intelligence.
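
To give a rough sense of the direction I have in mind, here is a deliberately simplified sketch of an AAI-style decision loop that combines confidence scores from several behavioral models and grants access only while the combined score stays above a threshold. The signal names, weights, and threshold below are hypothetical placeholders, not a finished design.

# Illustrative sketch only: a stand-in for an AAI-style decision loop. The
# per-signal confidences would come from models that have learned the user;
# here they are hard-coded placeholders.
signals = {
    "gait": 0.92,
    "typing_rhythm": 0.88,
    "location_pattern": 0.97,
    "voice": 0.85,
}

# Assumed relative weights for how much each signal should count.
weights = {
    "gait": 0.2,
    "typing_rhythm": 0.3,
    "location_pattern": 0.3,
    "voice": 0.2,
}

def combined_confidence(signals, weights):
    """Fold per-signal confidences into a single weighted score in [0, 1]."""
    return sum(signals[name] * weights[name] for name in weights)

def should_grant_access(signals, weights, threshold=0.9):
    """Grant access only while confidence stays above the threshold, so the
    user never has to present a password or token."""
    return combined_confidence(signals, weights) >= threshold

print(should_grant_access(signals, weights))  # True for these placeholder values

The interesting part is not the arithmetic but where the confidences come from: models that have learned the user’s habits well enough to recognize them, which is exactly the privacy trade-off this series is about.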

Over the next few posts, I want to explore the possibility of creating AI authentication mechanisms, a project I’ve codenamed Odd Passenger and will from time to time refer to as Authentication by AI (“AAI”). I plan to identify and evaluate the technologies I find pertinent to the concept, and to present an idea that will hopefully grow into a viable solution.

If you have feedback on this idea, I’d love to hear it! I can be reached via cvdw@werken.com.

-----END HASHED POST BLOCK-----

Featured Image Credit: Gerd Altmann, “untitled”. Public Domain Work. Modified, Desaturated.

SHA256(cb05ac6f578f97f14113553cfe5e990a-74e844ede81bdb75dbac0596c14a3db7fc1d1ba4.zip)= 06ae1a7b60bffa18f5d4f0e716e811ff91e26a827fdb924367ad95fa3edab15a
