Is Your Consent Real? Rethinking Data Protection in the Age of AI and Smart Devices
Why It Matters
The smart technologies we rely on every day – from fitness trackers to AI chatbots – collect personal data around the clock. But is the consent we give actually meaningful, or just a ticked box?
Key Takeaways
- Consent today often fails to meet the legal and ethical standard of being “free and informed” due to complex notices, limited choices, and opaque technologies.
- AI and the Internet of Things (IoT) have outpaced current data protection frameworks, making it harder for individuals to control how their data is used.
- The solution may lie in building privacy into technology from the start, not relying on users to understand or manage complex data decisions.
Why Consent, Transparency and Accountability Are Interlinked
Consent is at the heart of data protection – it is meant to give individuals control over how their information is used. But true consent must be freely given, informed, and tied to a specific purpose. This only works if users clearly understand what they’re agreeing to. That’s where transparency comes in: if companies don’t disclose what data they collect and why, consent can’t be meaningful.
Global privacy laws – from the EU’s General Data Protection Regulation (GDPR) to Singapore’s and India’s data protection laws – are built on this foundation. They require organisations to tell people how their data will be used and to be accountable for following through. Without transparency, people are left in the dark; without accountability, there’s little recourse when consent is ignored or misused.
Why Consent is Failing in the Digital Age
In practice, most people don’t read the small print. Privacy policies are dense, filled with legal jargon, and often appear at inconvenient moments – like when you’re trying to install an app or complete a transaction. Even if you did read them, what real choice do you have? Say no, and you can’t use the service. This isn’t consent – it’s coercion.
The situation is worse with AI and IoT devices. Smart speakers, wearables and connected appliances collect data passively, often with no screen or interface through which to ask for your permission. AI systems, meanwhile, constantly evolve – learning from data and changing their behaviour. That means even if you agreed to one use of your data, the way it’s processed can change in ways you never anticipated.
Designing Privacy into Technology
To address these gaps, the article calls for a radical rethink: a shift from “notice and consent” to “privacy by design and default”. This means building privacy features into the technology itself – from how a device is designed to how its algorithms are trained. For example, smart home devices could be set to collect the minimum data needed unless the user explicitly changes the settings.
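To make that idea concrete, here is a minimal sketch in Python of what “privacy by default” can look like in practice. Everything in it – the PrivacySettings class, the enable_audio function, the specific defaults – is hypothetical rather than drawn from any real device API; the point is simply that the restrictive configuration is the starting state, and data collection widens only on an explicit, recorded user choice.

```python
from dataclasses import dataclass, replace

# A minimal sketch of "privacy by design and default" for a hypothetical
# smart-home device. All names are illustrative; this is not a real API.

@dataclass(frozen=True)
class PrivacySettings:
    # The defaults embody data minimisation: collect only what the core
    # service needs, keep it briefly, and share it with no one.
    collect_audio: bool = False            # opt-in only, never on by default
    collect_location: bool = False
    retention_days: int = 7                # shortest window that still works
    share_with_third_parties: bool = False

def enable_audio(settings: PrivacySettings, consent_record: str | None) -> PrivacySettings:
    """Widen data collection only on an explicit, recorded user choice."""
    if not consent_record:
        # No recorded consent: the privacy-preserving default stands.
        return settings
    return replace(settings, collect_audio=True)

# The device ships with the strict defaults...
device = PrivacySettings()
assert not device.collect_audio

# ...and collection widens only after the user actively agrees.
device = enable_audio(device, consent_record="user tapped 'enable voice'")
assert device.collect_audio
```

The design choice is what matters here: inaction leaves the user protected, so the burden of understanding complex data decisions shifts from the individual to the system.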
The EU’s GDPR already embeds this principle – “data protection by design and by default” under Article 25 – but the obligation falls mainly on the organisations that use the technology (the data controllers), not on those who build it. Many controllers don’t build the tech at all – they just deploy it. So if an AI tool they rely on makes a mistake, they’re accountable, even though they may not fully understand how it works. That’s especially concerning with “black box” AI systems that can’t explain their decisions.
Business Implications
For businesses, this shift represents both a challenge and an opportunity. Companies must go beyond ticking compliance boxes – they need to choose technologies that prioritise privacy and understand how these tools collect and use personal data. That may mean demanding greater transparency from tech vendors or investing in privacy-aware product design.
It also suggests a growing role for regulators. The EU’s new AI Act, for instance, introduces design obligations for developers of high-risk AI systems. But most AI applications fall outside this high-risk category. As such, there’s still a large gap in ensuring technologies are safe, fair, and privacy-conscious by default.
Forward-looking companies can lead the way by embracing these standards early – demonstrating trustworthiness in a world where consumers and journalists are increasingly aware of data ethics. Those who fail to act may find themselves on the wrong side of public opinion – and regulation.
Authors and Sources
Authors: Althaf Marsoof (Nanyang Technological University) and Indranath Gupta (O.P. Jindal Global University)
Original article: Edward Elgar