Dr. Augustine Fou, Group Chief Digital Officer, FouAnalytics

Dr. Augustine Fou has been on the front lines of digital marketing for 25 years. It is from that vantage point that he has studied and documented the nexus of cybercrime and ad fraud. As an investigator, Dr. Fou assists government and regulatory bodies; as a consultant, he helps clients strengthen cybersecurity and mitigate threats and risks, including the flow of ad dollars that funds further criminal activity. Dr. Fou is a former Group Chief Digital Officer of Omnicom's Healthcare Consultancy Group, a $100 million group of 8 agencies serving pharma, medical device, and healthcare clients. He has also taught digital strategy and integrated marketing at Rutgers University and NYU's School of Continuing and Professional Studies. Dr. Fou completed his PhD in Materials Science and Engineering at MIT at the age of 23. He started his career with McKinsey & Company and previously served as SVP, digital strategy lead, at McCann/MRM Worldwide.

This article is by Featured Blogger Dr. Augustine Fou, from his blog. Republished with the author’s permission.

Cybersecurity grows more important as we spend more of our lives online and on connected devices. Of course, different devices and activities require different levels of security. However, most online authentication still revolves around passwords, which are notoriously weak due to misaligned incentives -- users voluntarily choose shorter, easier-to-remember passwords, which are also easier to guess. Automated attacks can make millions of “guesses” per second, whether brute-forcing passwords outright or replaying leaked username-password pairs in so-called “credential stuffing” attacks.

Something you know - passwords 

Something you have - device, security key 

Something you are - biometrics 
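To make the scale of such guessing attacks concrete, here is a back-of-the-envelope sketch in Python. The guess rate is an illustrative assumption, not a measured benchmark:

```python
# Worst-case time for an attacker to exhaust a password keyspace.
# RATE is an illustrative assumption, not a benchmark figure.

def seconds_to_exhaust(alphabet_size: int, length: int, guesses_per_sec: float) -> float:
    """Time to try every possible password of the given length."""
    return alphabet_size ** length / guesses_per_sec

RATE = 1e6  # assumed: one million guesses per second

for length in (6, 8, 12):
    secs = seconds_to_exhaust(26, length, RATE)  # lowercase letters only
    print(f"{length} lowercase chars: {secs / 86400:,.1f} days worst case")
```

Each extra character multiplies the keyspace, but attackers with GPU farms can raise the guess rate by many orders of magnitude, which is why short or predictable passwords fall almost instantly.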

This led to the increasing adoption of two-factor authentication (2FA), which adds an element of “something you have” to the “something you know” (the password). But most online 2FA revolves around codes that are sent through SMS and valid only for a short time window. Unfortunately, it’s well documented that rogue apps on phones can intercept SMS messages and easily defeat this form of security. The user might have accidentally granted permissions to read and write SMS when installing the app, or might have knowingly granted such permissions to free texting and chatting apps (ever wonder why there are so many free texting and chatting apps when texting and chatting are already built into every phone?).
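Short-lived codes don’t have to travel over SMS at all: authenticator apps derive them locally from a shared secret using the time-based one-time password algorithm (TOTP, RFC 6238), so there is no message to intercept. A minimal sketch, using the RFC’s published test key rather than any real credential:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, at=None, step=30, digits=6):
    """Time-based one-time password (RFC 6238, HMAC-SHA1 variant)."""
    counter = int((time.time() if at is None else at) // step)
    msg = struct.pack(">Q", counter)            # 8-byte big-endian time counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test key, evaluated at test time T=59 seconds (counter = 1)
print(totp(b"12345678901234567890", at=59))  # → 287082
```

Server and app compute the same code from the shared secret and the current time, so nothing secret crosses the network at login -- though the scheme still rests on a stored secret, a weakness discussed further below in the original article.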

And for even greater security, some devices are adding “something you are” features like fingerprint readers, facial recognition (e.g. Apple FaceID), and iris scanners. While biometrics are more secure in theory, criminals have found easy and clever ways to defeat them too. They can either buy biometric data from others or simply make “viral apps” that trick gullible humans into voluntarily photographing their own faces for frivolous things like “cartoonify your face.” And remember all those “which celebrity do you look like” and “what will you look like when you’re old” apps? Right, users volunteered their own faces.

Aside from the various nefarious and clever ways bad guys can defeat current forms of online security, what if users valued their privacy more and preferred not to use their fingerprints or irises for authentication? Further, what if we assumed attackers had unlimited computing power (powerful servers, parallel-processing GPUs, and dedicated ASICs) to attack your security infrastructure? Even long passwords won’t safeguard you; 2FA through networks attackers can access won’t safeguard you; and faces and other biometrics can also be defeated given enough time.

I have also said previously that any security scheme that relies on secrets is not good enough. That’s because those secrets can be compromised via “pain or gain.” We’ve all seen enough movies to know that attackers can torture passwords out of secret-holders (pain) or offer them large sums of money to disclose the secret (gain). So security systems that rely on secrets will still be subject to the perennial weakest link -- humans. That’s assuming there was no human error that accidentally exposed everything already (databases left unsecured in the cloud), or some human plugging a USB thumb drive they found on the street into an air-gapped computer at a nuclear facility (Stuxnet). Finally, any security scheme that can be compromised by human laziness is not good enough -- if you make humans do too much work to secure things, over time they will find workarounds and time-savers that defeat the entire scheme.

So we are left with the following three-pronged challenge: 1) attackers must be assumed to have unlimited computing power to bring to bear against your security; 2) humans are inherently lazy and forgetful, so security schemes should not rely on them to do too much work, otherwise the schemes are inherently weak; and 3) biometrics like facial recognition are not desirable due to long-term privacy implications (someone else has to hold all your faces in a database to use them for authentication, and that is not secure in and of itself).

How would you solve this security challenge? I will lay out how we do it in my next article. In the meantime, here are some hints.