Dr. Augustine Fou, Group Chief Digital Officer, FouAnalytics

Dr. Augustine Fou has been on the front lines of digital marketing for 25 years. It is from that vantage point that he has studied and documented the nexus of cybercrime and ad fraud. As an investigator, Dr. Fou assists government and regulatory bodies; as a consultant, he helps clients strengthen cybersecurity and mitigate threats and risks, including the flow of ad dollars that fund further criminal activity. Dr. Fou formerly served as Group Chief Digital Officer of Omnicom's Healthcare Consultancy Group, a $100 million group of eight agencies serving pharma, medical device, and healthcare clients. He has also taught digital strategy and integrated marketing at Rutgers University and NYU's School of Continuing and Professional Studies. Dr. Fou completed his PhD at MIT in Materials Science and Engineering at the age of 23. He started his career with McKinsey & Company and previously served as SVP, digital strategy lead at McCann/MRM Worldwide.

This article is by Featured Blogger Dr. Augustine Fou  from his Blog Page. Republished with the author’s permission. 

Picking up where we left off last time: how do we solve the following three-pronged challenge in cybersecurity? 1) Assume attackers have unlimited computing power, which means an escalating arms race will be futile; 2) assume humans are lazy and forgetful and don’t want to do too much work to secure themselves; and 3) security schemas should not rely on “what you are” (e.g. fingerprints, irises, DNA, etc.) because that requires trusting someone else with those intimate details -- and those details can be compromised via “pain or gain.” A privacy advocate added, "a person can't change their fingerprints, irises, or DNA if those are compromised; and you know how often we hear about large-scale data breaches these days. It's imperative we keep these intimate details truly secret and in the hands of as few parties as possible, preferably no parties other than the person to whom they belong." 

In Time 

Some current security schemas have introduced an element of time into the equation. For example, SMS-based 2FA codes expire after a certain number of minutes, so if the attacker does not intercept and use them in time, they will fail to defeat the security. Other, more secure mechanisms like Google Authenticator also rely on time -- a given code lives for a specific amount of time before it becomes invalid and another code replaces it. This is more secure than SMS-based 2FA because the codes are not so easily intercepted at the network level or by rogue apps installed on the device. The additional security gained by using codes that expire does not require human users to do any extra work -- like remembering longer passwords -- so there won’t be the large barrier to adoption known as human laziness.  
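The time-based scheme behind apps like Google Authenticator is standardized as TOTP (RFC 6238). Here is a minimal sketch in Python; the shared secret and the 30-second step are illustrative values, not from any real deployment:

```python
import hashlib
import hmac
import struct
import time

def totp_code(secret: bytes, at_time: float, step: int = 30, digits: int = 6) -> str:
    """Derive a short numeric code from a shared secret and the current
    time step. The code changes every `step` seconds, so an intercepted
    code is only useful within that window."""
    counter = int(at_time) // step
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation, as in RFC 4226 HOTP
    number = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(number % (10 ** digits)).zfill(digits)

secret = b"illustrative-shared-secret"
code = totp_code(secret, time.time())
# The server, sharing the secret and the clock, derives the same code
# for the same 30-second step and rejects codes from expired steps.
```

Because both sides derive the code from the clock, nothing secret travels over the network at login time -- only the short-lived code itself.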

In Space 

Another security check that can be built in is geospatial location. For example, if a user’s device is normally in a certain city and all of a sudden it appears to be attempting to log in from another city or country, further checks need to be done. Of course, the person could simply be on vacation in the other city; but it could also be a credential stuffing attack launched from a data center or a compromised device in another location. Google services already have this built in; you may have seen a prompt asking you to verify the device and login if Google sees it coming from a different geolocation. Similarly, the same login from a different device or device type doesn’t happen that often -- humans don’t upgrade their computers daily or add logins to dozens of other devices quickly. Again, simple checks based on geospatial location can increase security without asking the human to do more work.  
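A toy sketch of this kind of check follows. The `city` and `device` fields are hypothetical simplifications; real systems derive location from IP geolocation databases and identify devices by richer fingerprints:

```python
def needs_verification(login: dict, history: list[dict]) -> bool:
    """Return True when the login's city or device type is new for this
    account -- e.g. a credential-stuffing attempt launched from a data
    center the user has never been near."""
    seen_cities = {h["city"] for h in history}
    seen_devices = {h["device"] for h in history}
    return login["city"] not in seen_cities or login["device"] not in seen_devices

# Recent logins for this account: two known locations, two known devices.
history = [{"city": "New York", "device": "laptop"},
           {"city": "New York", "device": "phone"}]

assert not needs_verification({"city": "New York", "device": "phone"}, history)
assert needs_verification({"city": "Bucharest", "device": "laptop"}, history)
```

The point is that the check costs the legitimate user nothing until something anomalous happens, at which point a one-time verification prompt appears.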

Over Time 

Finally, combining elements of the above -- time and space -- some security schemas look at the location of the device over time. For example, if a user is not traveling for work or vacation, their device moves around their hometown in relatively predictable patterns. They go to work in the morning, stay at the office during business hours, and go home at night, usually commuting the same way day after day. When they get home, they may visit many of the same websites, use a very consistent set of apps, and watch many of the same streaming channels. These patterns are relatively unique to the individual and relatively repeatable over time. This is how ad tech companies profile users and uniquely identify them, although the data was likely collected without their knowledge or consent, using hidden third-party (“3P”) trackers. This is also a useful addition to security schemas because attackers’ devices are unlikely to have the same pattern of activities over time -- so devices used for attacks can be distinguished from devices exhibiting normal activity. For example, if a device attempts to log in hundreds of times per hour, that is unusual. If a device tries to log in as dozens of different people, that’s unusual too, and it can be flagged.  
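Those last two checks -- too many attempts, or too many identities, from one device -- can be sketched as a simple counting rule. The thresholds below are illustrative, not production values:

```python
from collections import defaultdict

# Hypothetical thresholds; real systems tune these empirically.
MAX_ATTEMPTS_PER_HOUR = 20
MAX_DISTINCT_ACCOUNTS = 3

def flag_suspicious_devices(attempts):
    """attempts: list of (device_id, account, hour) tuples.
    Flags devices that log in far too often, or as too many different
    people, within a single hour -- patterns no normal user produces."""
    counts = defaultdict(int)
    accounts = defaultdict(set)
    for device, account, hour in attempts:
        counts[(device, hour)] += 1
        accounts[(device, hour)].add(account)
    flagged = set()
    for (device, hour), n in counts.items():
        if n > MAX_ATTEMPTS_PER_HOUR or len(accounts[(device, hour)]) > MAX_DISTINCT_ACCOUNTS:
            flagged.add(device)
    return flagged
```

A normal device retrying a mistyped password stays well under both thresholds; a credential-stuffing device trips one or both almost immediately.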

Our approach to cybersecurity 

Given the lessons above, our approach to cybersecurity combines all three insights: it assumes attackers have unlimited computing power, assumes humans are too lazy to protect themselves well, and avoids using intimate personal details like iris scans because of the negative privacy implications. I will use some real-life products as examples of how we have built in layers of cybersecurity -- “practice what we preach,” as it were.  

Single-use identifiers. In FouAnalytics, we assume attackers will try not only to compromise the tech platform but also to corrupt the data in it. This is why each JavaScript tag is universally unique and single-use. We make it unique by adding an element of time; and as far as I know, in our current known universe, time doesn’t repeat. Each code is also single-use, which means it cannot be copied and replayed. This defeats the form of attack where fraudsters copy a previously validated device identifier and use it to defeat fraud detection that assumes a validated deviceID remains valid.  
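A hedged sketch of the idea -- not the actual FouAnalytics implementation -- is to combine a timestamp with random entropy, and have the receiving server reject any identifier it has already seen:

```python
import secrets
import time

def new_tag_id() -> str:
    """A universally unique, single-use identifier: a nanosecond
    timestamp (time doesn't repeat) plus random entropy, so a copied
    identifier is trivially detectable as a replay."""
    return f"{time.time_ns()}-{secrets.token_hex(8)}"

class WriteBackServer:
    """Toy server that accepts each identifier exactly once."""
    def __init__(self):
        self._seen = set()

    def accept(self, tag_id: str) -> bool:
        # A replayed ID is evidence of a copied tag, not a new measurement.
        if tag_id in self._seen:
            return False
        self._seen.add(tag_id)
        return True
```

The first submission of an identifier succeeds; any replay of the same identifier is rejected, which is exactly the property that defeats the copy-and-replay attack described above.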

Three-minute listening window. Each JavaScript tag not only carries a unique identifier; it is also timestamped and locked to a specific write-back server. Specifically, there is a three-minute window in which we listen for data, after which we stop listening. Also, only a specific server in our infrastructure will listen for that specific measurement tag. That way, attackers attempting to corrupt data have a much greater challenge writing false data to the right server within the right three-minute window. Even if they did, we can easily see that the data of that single visit/impression was corrupted and discard it. The FouAnalytics infrastructure does not listen to any other attempts to write data to it; it only listens to our own JavaScript tags sent out from our servers.  
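A simplified model of such a window follows; the names (`MeasurementServer`, `WINDOW_SECONDS`) are illustrative, not the real FouAnalytics code:

```python
WINDOW_SECONDS = 180  # illustrative three-minute listening window

class MeasurementServer:
    """Toy model: accepts a write only for a tag it issued itself,
    only from the server the tag is bound to, and only within the
    listening window after issuance."""
    def __init__(self, server_id: str):
        self.server_id = server_id
        self._issued = {}  # tag_id -> (issue_time, bound_server_id)

    def issue_tag(self, tag_id: str, now: float) -> None:
        self._issued[tag_id] = (now, self.server_id)

    def accept_write(self, tag_id: str, server_id: str, now: float) -> bool:
        record = self._issued.get(tag_id)
        if record is None:
            return False          # unknown tag: never listened for
        issued_at, bound = record
        if server_id != bound:
            return False          # tag is locked to a different server
        return now - issued_at <= WINDOW_SECONDS  # window still open?
```

An attacker must therefore guess the right tag, the right server, and the right three minutes -- and a wrong guess on any one dimension yields nothing.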

Combinatorial security. In crpt.info, a secure messaging service we built in 2013, we rolled our own security, which we call combinatorial security. We do not use any standard encryption libraries because we assume those are already compromised with NSA backdoors, and even when used, they can be defeated by the unlimited computing power that attackers can bring to bear. Combinatorial security is based on the following: 1) sharding each message into an unknown number of shards, 2) encrypting each shard with an unknown reversible mathematical transform, 3) storing each shard on one of an unknown number of cloud servers, and 4) storing each shard in a temporary database table or filesystem subdirectory named with an alphanumeric string of unknown length. Each message has its own combinatorial security, which is never the same, so breaking the security of one message does not help the attacker break the next; they have to start over from scratch. Finally, the message itself and all its shards exist only between the time the message is sent and the time it is picked up. Once retrieved by the intended recipient, all the shards evaporate. This means the attacker has to complete all interception, decryption, and re-assembly before everything disappears and they have to start over. They are much more likely to run out of time than to succeed in breaking the security and retrieving the contents of that one message. It also becomes a simple tradeoff for the attacker -- “does the value of the one intercepted message outweigh the cost of the computing power (including quantum computing) needed, with no guarantee of success?” 
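To illustrate the sharding idea only: the toy below uses a random shard count, a one-time XOR as the reversible transform, and a random server index per shard. It is a conceptual sketch, not the crpt.info implementation, and the shard count and server count are arbitrary:

```python
import secrets

def shard_message(message: bytes, servers: int = 5):
    """Split a message into a random number of shards, apply a
    reversible transform (XOR with a one-time random key) to each,
    and assign each shard to a randomly chosen server. The plan
    itself -- shard count, keys, placements -- is the secret."""
    n = secrets.randbelow(4) + 2           # unknown shard count (2-5)
    size = -(-len(message) // n)           # ceiling division
    plan = []
    for i in range(n):
        chunk = message[i * size:(i + 1) * size]
        key = secrets.token_bytes(len(chunk))
        cipher = bytes(a ^ b for a, b in zip(chunk, key))
        plan.append({"server": secrets.randbelow(servers),
                     "cipher": cipher,
                     "key": key})
    return plan

def reassemble(plan) -> bytes:
    """Reverse the transform on each shard and concatenate in order."""
    return b"".join(bytes(a ^ b for a, b in zip(s["cipher"], s["key"]))
                    for s in plan)
```

Because every parameter is re-randomized per message, an attacker who somehow reconstructs one message learns nothing reusable about the next.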

On-device trust history. In addition to the combinatorial security schema for each message, we have built on-device trust history. This is like Apple’s announcement that in iOS 15 all voice recognition will be done “on-device,” so no voice data needs to leave the device, go to the cloud, and potentially be intercepted or compromised. A trust history is built up for the device based on combinations of parameters measured by device sensors, including geolocation, gyroscope, ambient light and noise, elevation and gravity, and many others. This data is reduced to “combo signals,” which represent the device but never leave it. Each device has its own unique trust history -- its combo signals over time. Various security schemas can leverage trust history without ever needing biometric signals from the user or requiring the user to do any additional work.  
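A conceptual sketch of the reduction step: raw sensor readings are hashed into an opaque digest that can be compared against the device's history without ever exposing the raw values. The names and readings below are hypothetical:

```python
import hashlib

def combo_signal(readings: dict) -> str:
    """Reduce raw sensor readings (geolocation, gyroscope, ambient
    light, ...) to an opaque digest. The digest represents the
    device's state but reveals none of the raw values -- and it is
    meant to stay on the device."""
    canonical = "|".join(f"{k}={readings[k]}" for k in sorted(readings))
    return hashlib.sha256(canonical.encode()).hexdigest()

class TrustHistory:
    """Toy on-device store of combo signals observed over time."""
    def __init__(self):
        self._signals = []

    def record(self, readings: dict) -> None:
        self._signals.append(combo_signal(readings))

    def is_familiar(self, readings: dict) -> bool:
        # A state the device has produced before is treated as trusted.
        return combo_signal(readings) in self._signals
```

Nothing biometric is involved, nothing leaves the device, and the user does no extra work -- the three properties the article argues a good schema needs.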

So What? 

Hopefully this gives you a glimpse of our approach to cybersecurity, a few of the products and services built with these principles and technologies over the last two decades, and how we can provide cybersecurity that solves the triple challenge mentioned above -- attackers have unlimited computing power, humans are lazy and easily compromised, and privacy must be taken into account as well.