iSPIRT Final Comments on India’s Personal Data Protection Bill

Below are iSPIRT's comments and recommendations on the draft Personal Data Protection Bill.  iSPIRT's overall data privacy and data empowerment philosophy is covered here.  

Table of Contents

Major Comments
1. Include Consent Dashboards
2. Financial Understanding and Informed Consent for all Indians
3. Data Fiduciary Trust Scores Similar to App Store Ratings
4. Comments & Complaints on Data Fiduciaries are Public, Aggregatable Data
5. Warn of Potential Credit and Reputation Hazards
6. A Right to View and Edit Inferred Personal Data
7. Sharing and Processing of Health Data

Suggestions and Questions

  • Fund Data Rights Education
  • Limit Impact Assessment Requirement
  • Passwords should be treated differently from other Sensitive Personal Data.
  • Does the Bill intend to ban automatic person-tagging in photos and image search of people?
  • Notifications about updates to personal data should be handled by a Consent Dashboard, not every data fiduciary.
  • Need for an Authority appeal process when data principal rights conflict
  • Do not outlaw private fraud detection
  • Limit record keeping use and disclosure to the Authority and the company itself.
  • Filings may be performed digitally
  • Request for Definition Clarifications
  • Author Comments
  • Links
  • Appendix – Sample User Interface Screens

Major Comments

1. Include Consent Dashboards

We support the idea of a Consent Dashboard as suggested in the Data Protection Committee Report (page 38) and recommend that it be incorporated into the Bill in Section 26 (Right to Data Portability) and Section 30(2) (Transparency).  

We envision all of a user's personal and inferred data that is known to data fiduciaries (i.e. companies) being exposed on a consent dashboard, provided by a third-party consent collector or account aggregator (to use the RBI's parlance). An example user interface is included in the Appendix.

This mandate would enable users to have one place – their consent collector-provided dashboard – to discover, view and edit all data about them. It would also allow users to see any pending, approved and denied data requests.

Furthermore, in the event of data breaches, especially when a user’s password and identifier (mobile, email, etc) have been compromised, the breach and recommended action steps could be made clear on the consent dashboard.

Given the scope of this suggestion, we recommend an iterative or domain specific approach, wherein financial data is first listed in a dashboard limited to financial data and for its scope to grow with time.
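
To make the idea concrete, below is a minimal sketch, in Python, of the kind of consent-request record such a dashboard might maintain. All names and fields here are illustrative assumptions on our part, not a proposed standard:

    from dataclasses import dataclass, field
    from datetime import datetime
    from enum import Enum

    class Status(Enum):
        PENDING = "pending"
        APPROVED = "approved"
        DENIED = "denied"
        REVOKED = "revoked"

    @dataclass
    class ConsentRequest:
        fiduciary_id: str       # the company requesting access
        data_categories: list   # e.g. ["bank_statements", "inferred_credit_score"]
        purpose: str            # stated purpose, shown to the user in plain language
        expires_at: datetime    # consent is time-bound as well as revocable
        status: Status = Status.PENDING

    @dataclass
    class ConsentDashboard:
        requests: list = field(default_factory=list)

        def pending(self):
            """The 'pending, approved and denied data requests' view described above."""
            return [r for r in self.requests if r.status is Status.PENDING]

        def revoke(self, fiduciary_id: str):
            """One place for the user to withdraw a given fiduciary's access."""
            for r in self.requests:
                if r.fiduciary_id == fiduciary_id and r.status is Status.APPROVED:
                    r.status = Status.REVOKED

The essential property is that every fiduciary's access is represented as an explicit, expiring, revocable record in one place, rather than scattered across the terms of service of each company.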

2. Financial Understanding and Informed Consent for all Indians

We applaud the Bill’s Right to Confirmation and Access (Chapter IV, Section 24):

The data fiduciary shall provide the information as required under this section to the data principal in a clear and concise manner that is easily comprehensible to a reasonable person.

That said, we’ve found in practice that it’s difficult to appreciate the implications of digital policies on users until real user interfaces are presented to end users and then tested for their usability and understanding. Hence, we’ve put together a set of sample interfaces (see Appendix) that incorporate many of the proposed bill’s provisions and our recommendations. That said, much more work is needed before we can confidently assert that most Indians understand these interfaces and what they are truly consenting to share.

The concepts behind this bill are complicated and yet important. Most people do not understand concepts such as “revocable data access rights” and other rather jargon-filled phrases often present in the discussion of data privacy rights. Hence, we believe the best practices from interface design must be employed to help all Indians – even those who are illiterate and may only speak one of our many non-dominant languages – understand how to control their data.

For example, multi-language interfaces with audio assistance and help videos could be created to aid understanding and create informed consent.  Toll-free voice hotlines could be available for users to ask questions. Importantly, we recognize that the interfaces of informed consent and privacy control need rigorous study and will need to evolve in the years ahead.

In particular, we recommend user interface research in the following areas:

  • Interfaces for low-education and traditionally marginalized communities
  • Voice-only and augmented interfaces
  • Smart and “candy-bar” phone interfaces
  • Both self-service and assisted interfaces (such that a user can consensually and legally delegate consent, as taxpayers do to accountants).

After user interface research has been completed and one can confidently assert that certain interface patterns are understood by most Indian adults, we can imagine templated designs representing best practices being recommended to the industry, much like the design guidelines for credit card products published by the US Consumer Financial Protection Bureau, or nutritional labelling.

3. Data Fiduciary Trust Scores Similar to App Store Ratings

We support the government’s effort to improve the trust environment and believe users should have appropriate, easy and fast ways to give informed consent & ensure bad actors can’t do well. Conversely, we believe that the best actors should benefit from a seamless UI and rise to the top.

The courts and data auditors can’t be the only way to highlight good, mediocre and bad players. From experience, we know that there will be a continuum of good to bad experiences provided by data fiduciaries, with only the worst and often most egregious actions being illegal.

People should be able to see the experiences of other users – both good and bad – to make more meaningful and informed choices. For example, a lender that also cross-sells other products to loan recipients and shares their mobile numbers may not be engaging in an illegal activity but users may find it simply annoying.

Hence, we recommend that data fiduciary trust scores be informed by user-created negative reviews (aka complaints) and positive reviews.

In addition to Data Auditors (as the Bill envisions), user-created, public ratings will create additional data points and business incentives for data fiduciaries to remain in full compliance with this law, without a company's data protection assessment being the sole domain of its paid data auditors.

We would note that crowd-sourced rating systems are an ever-evolving tech problem in their own right (subject to gaming, spam, etc.) and hence trust rating and score maintenance may be best provided by multiple market actors and tech platforms.
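
As one illustration of a gaming-resistant aggregation technique, a dampened (Bayesian) average keeps a fiduciary with only a handful of reviews close to a neutral prior. This is a minimal sketch of one well-known approach; the constants are illustrative assumptions, not recommendations:

    def trust_score(ratings, prior_mean=3.0, prior_weight=20):
        """Dampened average on a 1-5 scale: with few ratings the score stays
        near prior_mean, so a burst of fake five-star reviews moves it little."""
        return (prior_weight * prior_mean + sum(ratings)) / (prior_weight + len(ratings))

    print(trust_score([5, 5, 5]))   # ~3.26: three glowing reviews barely move a new fiduciary
    print(trust_score([4] * 500))   # ~3.96: a long, consistent record dominates the prior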

4. Comments & Complaints on Data Fiduciaries are Public, Aggregatable Data

…so 3rd party actors and civil society can act on behalf of users.

A privacy framework will not change the power dynamics of our society overnight. Desperate people in need of money will often sign over almost anything, especially abstract rights. Additionally, individual citizens will rarely be able to see larger patterns in the behaviour of lenders or other data fiduciaries and are ill-equipped to fight for small rewards on behalf of their community.  Hence, we believe that user ratings and complaint data about data fiduciaries must be made available in machine-readable form not only to the State but to third parties, civil society and researchers, so that they may identify patterns of good and bad behaviour, acting as additional data rights watchdogs on behalf of all of us.

5. Warn of Potential Credit and Reputation Hazards

We are concerned about the rise of digital and mobile loans in other countries in recent years. Kenya – a country with high mobile-payment penetration and hence, like India, one that has become data-rich before becoming economically rich – had seen more than 10% of its adult population on credit blacklists by 2017; three percent of all digital loans were reportedly used for gambling. These new loan products were largely made possible by digital money systems and the ability of lenders to create automated risk profiles from personal data; they clearly have the potential to cause societal harm and must be considered carefully.

Potential remedies to widespread multiple loans are being proposed (e.g. real-time credit reporting services), but the fact that a user's reputation and credit score will be affected by an action (such as taking out a loan) must also be known and understood by users. For example, users need to know that an offered loan will be reported to other banks and that, if they do not repay, they will be reported and unable to get other loans.

Furthermore, shared usage-based patterns – such as whether a customer pays their bills on time or buys certain types of products – must be available for review by end users.

6. A Right to View and Edit Inferred Personal Data

The Machine Learning and AI community has made incredible strides in computers' ability to predict or infer almost anything. For example, in 2017, a babajob.com researcher showed the company could predict whether a job seeker earned more or less than Rs 12,000/month with more than 80% accuracy, using just their photo.  She did this using 3,000 job-seeker photos, 10 lines of code and Google's TensorFlow for Poets sample code.  Note that the project was never deployed or made publicly available.
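
For readers unfamiliar with how little code such a classifier requires, below is a minimal transfer-learning sketch in the spirit of TensorFlow for Poets, written with the modern tf.keras API. It is not the researcher's actual code; the photos/ directory and its labels are hypothetical:

    import tensorflow as tf

    # Pre-trained ImageNet features, with one new output layer for the binary
    # label (earns more or less than Rs 12,000/month).
    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, weights="imagenet", pooling="avg")
    base.trainable = False
    model = tf.keras.Sequential([base, tf.keras.layers.Dense(1, activation="sigmoid")])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

    # Hypothetical folder with "above"/"below" subdirectories of labelled photos.
    ds = tf.keras.utils.image_dataset_from_directory(
        "photos/", image_size=(224, 224), batch_size=32)
    # Rescale pixel values to the [-1, 1] range MobileNetV2 expects.
    ds = ds.map(lambda x, y: (tf.keras.applications.mobilenet_v2.preprocess_input(x), y))
    model.fit(ds, epochs=5)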

As these techniques become ever more commonplace in the years to come, it's reasonable to assume that public-facing camera and sensor systems will be able to accurately infer most of the personal data of their subjects – e.g. their gender, emotional state, health, caste, religion, income – and then connect this data to other personally identifiable data, such as a photo of their credit card and their purchase history. Doing so will improve training data, making these systems even more accurate. In time, such systems – especially those with large databases of labelled photos, like the government's, a popular social network's, or a mall's combined point-of-sale and video surveillance system – will be able to precisely identify individuals and their most marketable traits from any video feed.

Europe’s GDPR has enshrined the right for people to view data inferred about them, but in conjunction with the idea of a third party consent dashboard or Account Aggregator (in the RBI’s case), we believe we can do better.

In particular, any entity that collects or infers data about an individual that is associated with an identifier such as an email address, mobile number, credit card, or Aadhaar number should make that data viewable and editable by end users via their consent dashboard.  For example, if a payment gateway provider analyses your purchase history, infers you are diabetic and sells this information as a categorization parameter to medical advertisers, that payment gateway must notify you that it believes you are diabetic and enable you to view and remove this data. Google, for example, lists these inferences as Interests and allows users to edit them.

Using the Consent Dashboard mentioned in Major Comment 1, we believe users should have one place where they can discover, view and correct all personal and inferred data relevant to them.

Finally, more clarity is needed regarding how data gathered or inferred from secondary sources should be regulated and what consent may be required. For example, many mobile apps ask for a user's consent to read their SMS inbox and then read their bank confirmation SMSs to create a credit score. In our view, the inferred credit score should be viewable by the end user before it is shared, given that it is personal data that deeply affects the user's ability to gain access to a service (in this case, often a loan at a given interest rate).

7. Sharing and Processing of Health Data

The Bill requires capturing the purpose for data sharing:

Chapter II, point 5:

“Purpose limitation.— (1) Personal data shall be processed only for purposes that are clear, specific and lawful. (2) Personal data shall be processed only for purposes specified or for any other incidental purpose that the data principal would reasonably expect the personal data to be used for, having regard to the specified purposes, and the context and circumstances in which the personal data was collected.”

In the healthcare domain, recording the purpose for which data is being shared might itself be quite revealing. For example, if data is being shared for a potential cancer biopsy or HIV test, the purpose alone might be enough to make inferences and private determinations about the patient and, say, deny insurance coverage. On the other hand, stating high-level, blanket purposes might not be enough for future audits. A regulation must be in place to ensure the confidentiality of the stated purpose.  

The Bill has a provision for processing sensitive personal data for prompt action:

Chapter IV, point 21:

“Processing of certain categories of sensitive personal data for prompt action. — Passwords, financial data, health data, official identifiers, genetic data, and biometric data may be processed where such processing is strictly necessary— (a) to respond to any medical emergency involving a threat to the life or a severe threat to the health of the data principal; (b) to undertake any measure to provide medical treatment or health services to any individual during an epidemic, outbreak of disease or any other threat to public health; or (c) to undertake any measure to ensure safety of, or provide assistance or services to, any individual during any disaster or any breakdown of public order.”

While this is indeed a necessity, we believe that a middle ground could be achieved by providing an option for users to appoint consent nominees, in a similar manner to granting power of attorney. In cases of emergency, consent nominees such as family members could grant consent on behalf of the user. Processing without consent could happen only in cases where a consent nominee is unavailable or has not been appointed. This creates an additional layer of protection against misuse of health data of the user.
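
A sketch of the decision chain we have in mind follows; all objects and methods here are hypothetical, and the point is simply that the no-consent path of Section 21 becomes the last resort rather than the first:

    def emergency_consent(patient, requested_data, audit_log):
        """Ask the patient, then any appointed consent nominee; process
        without consent only when neither is available, logging the override."""
        if patient.is_responsive:
            return patient.grant_consent(requested_data)
        for nominee in patient.consent_nominees:   # e.g. family members, like a power of attorney
            if nominee.is_reachable():
                return nominee.grant_consent(requested_data, on_behalf_of=patient)
        audit_log.record("section-21 override", patient.id, requested_data)
        return True   # Section 21 still permits processing strictly necessary in an emergency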

Suggestions and Questions

Fund Data Rights Education

We believe a larger, public education program may be necessary to educate the public on their data rights.

Limit Impact Assessment Requirement

Section 33 – Data Protection Impact Assessment —

  • Where the data fiduciary intends to undertake any processing involving new technologies or large scale profiling or use of sensitive personal data such as genetic data or biometric data, or any other processing which carries a risk of significant harm to data principals, such processing shall not be commenced unless the data fiduciary has undertaken a data protection impact assessment in accordance with the provisions of this section. …
  • On receipt of the assessment, if the Authority has reason to believe that the processing is likely to cause harm to the data principals, the Authority may direct the data fiduciary to cease such processing or direct that such processing shall be subject to such conditions as may be issued by the Authority.

We believe that the public must be protected from egregious data profiling, but this provision does not strike an appropriate balance with respect to innovation. It mandates that companies and other researchers must ask government permission to innovate around large scale data processing before any work, public deployments or evidence of harm takes place. We believe this provision will be a large hindrance to experimentation and cause significant AI research to simply leave India. A more appropriate balance might be to ask data fiduciaries to privately create such an impact assessment, but only submit it to the Authority for approval once small scale testing has been completed (with potential harms better understood) and large scale deployments are imminent.

Passwords should be treated differently from other sensitive personal data.

Chapter IV – Section 18. Sensitive Personal Data. Passwords are different from other types of Sensitive Personal Data, given that they are a data security artifact rather than a piece of data pertinent to a person's being. We believe that data protection should be overridden in extraordinary circumstances without forcing companies to provide a backdoor that reveals passwords. We fully acknowledge that it is useful and sometimes necessary to provide backdoors to personal data – e.g. one's medical history in the event of a medical emergency – but requiring such a backdoor for passwords would likely introduce large potential security breaches throughout the entire personal data ecosystem.  

Does the Bill intend to ban automatic person-tagging in photos and image search of people?

Chapter I.3.8 – Biometric Data – The Bill defines Biometric Data to be:

“facial images, fingerprints, iris scans, or any other similar personal data resulting from measurements or technical processing operations carried out on physical, physiological, or behavioural characteristics of a data principal, which allow or confirm the unique identification of that natural person;”

The Bill includes Biometric Data in its definition of Sensitive Personal Data (section 3.35) which may only be processed with explicit consent:

Section 18. Processing of sensitive personal data based on explicit consent. — (1) Sensitive personal data may be processed on the basis of explicit consent

From our reading, a variety of features available today around image search and person-tagging would be disallowed by these provisions. E.g. Google's image search contains many facial images which have been processed to enable identification of natural persons. Facebook's "friend auto-suggestion" feature on photos employs similar techniques. Does the Bill intend for these features, and others like them, to be banned in India? It can certainly be argued that non-public people have a right to explicit consent before they are publicly identified in a photo, but we feel the Bill's authors should clarify this position. Furthermore, does the purpose of unique-identification processing matter with respect to its legality?  For example, we can imagine mobile phone-based machine learning algorithms automatically identifying a user's friends to make a photo easier to share with those friends; would such an algorithm require explicit consent from those friends before it may suggest them to the user?

Notifications about updates to personal data should be handled by a Consent Dashboard, not every data fiduciary.

Chapter IV – Section 25.4 – Right to correction, etc

Where the data fiduciary corrects, completes, or updates personal data in accordance with sub-section (1), the data fiduciary shall also take reasonable steps to notify all relevant entities or individuals to whom such personal data may have been disclosed regarding the relevant correction, completion or updating, particularly where such action would have an impact on the rights and interests of the data principal or on decisions made regarding them.

We believe the mandate on a data fiduciary to notify all relevant entities of a personal data change is too great a burden; this is better performed by a consent dashboard, which maintains the list of other entities holding a valid, up-to-date consent request to a user's data. Upon a data change, the data fiduciary would notify the consent dashboard of the change, and the consent dashboard would in turn notify all other relevant entities.

It may be useful to keep the user in this loop – so that this sharing is done with their knowledge and approval.
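
In effect, this turns correction notices into a hub-and-spoke fan-out. A sketch of the flow follows (the interfaces are hypothetical; a real deployment would also need authentication and audit logging):

    class ConsentDashboard:
        """The fiduciary that corrects a record notifies the dashboard once;
        the dashboard notifies the user and every entity holding valid consent."""

        def __init__(self):
            self.consents = {}   # data_category -> entities with consent to that data

        def on_data_corrected(self, user, data_category, change):
            user.notify(f"Your {data_category} was updated: {change}")   # keep the user in the loop
            for entity in self.consents.get(data_category, []):
                if entity.consent_is_valid():                            # skip expired/revoked consents
                    entity.notify(user.id, data_category, change)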

Need for an Authority appeal process when data principal rights conflict

Section 28.5 – General conditions for the exercise of rights in this Chapter. —  

The data fiduciary is not obliged to comply with any request made under this Chapter where such compliance would harm the rights of any other data principal under this Act.

This portion of the law enables a data fiduciary to deny a user’s data change request if it believes doing so would harm another data principal. We believe it should not be up to the sole discretion of the data fiduciary to determine which data principal rights are more important and hence would like to see an appeal process to the Data Protection Authority made available if a request is refused for this reason.

Do not outlaw private fraud detection

Section 43.1 Prevention, detection, investigation and prosecution of contraventions of law

(1) Processing of personal data in the interests of prevention, detection, investigation and prosecution of any offence or any other contravention of law shall not be permitted unless it is authorised by a law made by Parliament and State Legislature and is necessary for, and proportionate to, such interests being achieved.

We worry the above clause would effectively outlaw fraud detection research, development and services by private companies in India. For instance, if a payment processor wishes to implement a fraud detection mechanism, it should be able to do so without leaving that task to the State.  These innovations have a long track record of protecting users and businesses and reducing transaction costs. We recommend clarifying this section and/or restricting its application to the State.

Limit record keeping use and disclosure to the Authority and the company itself.

Section 34.1.a – Record-Keeping –

The data fiduciary shall maintain accurate and up-to-date records of the following

(a) important operations in the data life-cycle including collection, transfers, and erasure of personal data to demonstrate compliance as required under section 11;

We expect sensitive metadata and identifiers will need to be maintained for the purposes of record keeping; we suggest that this record-keeping information be allowed, but its use limited to this purpose and its disclosure limited to the company, its record-keeping contractors (if any) and the Authority.

Filings may be performed digitally

Section 27.4 – Right to be Forgotten

The right under sub-section (1) shall be exercised by filing an application in such form and manner as may be prescribed.

The Bill contains many references to filing an application; we'd suggest a definition that is broad and includes digital filings.

This also applies to sections that include "in writing", which must cover digital communications that can be stored (for instance, email).

Request for Definition Clarifications

What is “publicly available personal data”?

  • Section 17.2.g – We believe greater clarity is needed around the term "publicly available personal data." There are questionably obtained databases for sale that list the mobile numbers and addresses of millions of Indians – would these be included as publicly available personal data?
  • We recommend that the DPA define rules around what constitutes publicly available personal data so that it is taken out of the ambit of the Bill.  
  • The same can be said for data where there is no reasonable expectation of privacy (with the exception that systematic data collection on one subject cannot be considered such a situation).

Clarity of “Privacy by Design”

Section 29 – Privacy by Design

Privacy by Design is an established set of principles (see here and in GDPR), and we would like the Bill to reference those patterns explicitly, or use a different name if it wishes to employ another definition.

Define “prevent continuing disclosure”

Section 27.1 – Right to be Forgotten

The data principal shall have the right to restrict or prevent continuing disclosure of personal data by a data fiduciary…

We request further clarification on the meaning of "prevent continuing disclosure" and an example use case of harm.

Define “standard contractual clauses” for Cross-Border Transfers

Section 41.3.5 – Conditions for Cross-Border Transfer of Personal Data

(5) The Authority may only approve standard contractual clauses or intra-group schemes under clause (a) of sub-section (1) where such clauses or schemes effectively protect the rights of data principals under this Act, including in relation with further transfers from the transferees of personal data under this subsection to any other person or entity.

We would like to see standard contractual clauses clearly defined.

Define “trade secret”

Section 26.2.c – Right to Data Portability

compliance with the request in sub-section (1) would reveal a trade secret of any data fiduciary or would not be technically feasible.

We request further clarification on the meaning of "trade secret" and an example of the same.

Author Comments

Compiled by iSPIRT Volunteers:

Links

Comments and feedback are appreciated. Please mail us at [email protected].

Appendix – Sample User Interface Screens

Link: https://docs.google.com/presentation/d/1Eyszb3Xyy5deaaKf-jjnu0ahbNDxl7HOicImNVjSpFY/edit?usp=sharing

******

How To Empower 1.3 Billion Citizens With Their Data

2018 has been a significant year in our relationship with data. Globally, the Cambridge Analytica incident made people realise that democracy itself can be vulnerable to data.  Closer to home, we got a first glimpse of the draft privacy bill from the Justice Sri Krishna Committee.

The writing is on the wall: we cannot continue the way we have. This is a problem at every level – individuals need to be more careful with whom they share their data, and data controllers need to show more transparency and responsibility in handling user data. But one cannot expect that we will just organically shift to a more responsible, transparent, privacy-protecting regime without the intervention of the state. The draft bill, if it becomes law, will be a great win, as it finally prescribes meaningful penalties for transgressions by controllers.

But we must not forget the flip side of the coin: data can also help empower people. India has much more socio-economic diversity than other countries where a data protection law has been enacted. Our concerns go beyond limiting the exploitation of user data by data controllers. We must look at data as an opportunity and ask how we can help users generate wealth out of their own data. Thus we propose that we design an India-specific Data Protection & Empowerment Architecture (DEPA). Empowerment and protection are neither opposite nor orthogonal but co-dependent activities. We must think of them together, else we will miss the forest for the trees.

In my talk linked below, delivered at IDFC Dialogues Goa, I expand on these ideas. I also talk about the exciting new technology tools that can actually help us realise a future where data empowers.

I hope you take away something of value from the talk. The larger message, though, is that it is still early days for the internet. We can participate in shaping its culture, maybe even lead the way, instead of being passive observers. The Indian approach is finding deep resonance globally, and many countries, developing as well as developed, are looking to us for inspiration on how to deal with their own data problems. But it is going to take a lot more collaboration and co-creation before we get there. I hope you will join us on this mission to create a Data Democracy.

Data Privacy and Empowerment in Healthcare

Technology has been a boon to healthcare. Minimally invasive procedures have significantly improved the safety of surgeries and reduced recovery times. Global collaboration between doctors has improved diagnosis and treatment. Rising awareness among patients has increased the demand for good-quality healthcare services. These improvements, coupled with the growing penetration of IT infrastructure, are generating huge volumes of digital health data in the country.

However, healthcare in India is diverse and fragmented. Over an entire life cycle, an individual is served by numerous healthcare providers of different sizes, geographies, and constitutions. The IT systems of different providers are often developed independently of each other, without adherence to common standards. This fragmentation has the undesirable consequences of systems that communicate poorly, redundant data collection across systems, inadequate patient identification and, in many cases, privacy violations.

We believe that this can be addressed through two major steps. Firstly, open standards have to be established for health data collection, storage, sharing and aggregation in a safe and standardised manner to keep the privacy of patients intact. Secondly, patients should be given complete control over their data. This places them at the centre of their healthcare and empowers them to use their data for value-based services of their choice. As the next wave of services is built atop digital health data, data protection and empowerment will be key to transforming healthcare.

Numerous primary healthcare services are already shifting to smartphones and other electronic devices. There are apps and websites for diagnosing various common illnesses. This not only increases coverage but also takes the burden off existing infrastructure, which can then cater to secondary and tertiary services. Data shared from devices that track steps, measure heartbeats, count calories or analyse sleeping patterns can be used to monitor behavioural and lifestyle changes – a key enabler for digital therapeutic services. Moreover, this data can be used not only for monitoring but also for predicting the onset of disease! For example, an irregular heartbeat pattern can be flagged by such a device, prompting immediate corrective measures. Thus, as more and more people generate digital health data, control it and utilise it for their own care, we will gradually transition to a better, broader and preventive healthcare delivery system.

In this context, we welcome the proposed DISHA Act, which seeks to protect and empower individuals with regard to their electronic health data. We have provided our feedback on the DISHA Act and have also proposed technological approaches in our response. This blog post lays out a broad overview of our response.

As our previous blog post articulates the principles underlying our Data Empowerment and Protection Architecture, we have framed our response keeping these core principles in mind. We believe that individuals should have complete control of their data and should be able to use it for their empowerment. This requires laying out clear definitions for use of data, strict laws to ensure accountability and agile regulators; thus, enabling a framework that addresses privacy, security and confidentiality while simultaneously improving transparency and interoperability.

While the proposed DISHA Act aligns broadly with our core principles, we have offered recommendations to expand certain aspects of the proposal. These include a comprehensive definition of consent (open standards, revocable, granular, auditable, notifiable, secure), distinctions between different forms of health data (anonymised, de-identified, pseudonymous), commercial use of data (allowed for benefit but restricted for harm) and the types of, and penalties for, breaches (evaluated based on the extent of compliance).

Additionally, we have outlined the technological aspects of implementing the Act. We have drawn on learnings from the Digital Locker Framework and the Electronic Consent Framework (adopted by RBI's Account Aggregator), previously published by MeitY. This involves the role of Data Fiduciaries – entities that not only manage consent but also ensure that it aligns with the interests of the user (and not with those of the data consumer or data provider). Data Fiduciaries act only as messengers of encrypted data, without having access to the data itself – thus their prime task remains managing the Electronic Data Consent. Furthermore, we have highlighted the need to use open, established standards for accessing and maintaining health records (open APIs), consented sharing (the consent framework) and maintaining accountability and traceability through digitally verified documents. We have also underscored the need for standardisation of data through health data dictionaries, which will open up the data for further use cases. Lastly, we have alluded to the need to create aggregated, anonymised datasets to enable advanced analytics that would drive data-driven policy making.
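
The "messenger without access" property can be illustrated with a toy sketch using symmetric encryption from Python's cryptography package. Key distribution and the consent artefact itself are elided, and the function names are hypothetical:

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # shared, with consent, between provider and consumer only

    def provider_send(record: bytes) -> bytes:
        return Fernet(key).encrypt(record)        # encrypted at the data provider

    def fiduciary_relay(ciphertext: bytes) -> bytes:
        # The fiduciary logs the consent and forwards the blob; without the
        # key it cannot read the underlying health record.
        return ciphertext

    def consumer_receive(ciphertext: bytes) -> bytes:
        return Fernet(key).decrypt(ciphertext)    # decrypted at the data consumer

    assert consumer_receive(fiduciary_relay(provider_send(b"blood report"))) == b"blood report"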

We look forward to the announcement and implementation of the DISHA Act. As we move towards a future with an exponential rise in digital health data, it is critical that we build the right set of protections and empowerments for users, thus enabling them to become engaged participants and better managers of their health care.

We have submitted our response. You can find the detailed document of our response to the DISHA Act below.

Policy Hacks Session on GDPR & DEPA

There is both concern and curiosity about the European Union's General Data Protection Regulation (GDPR), and a related issue in India is covered under the Data Empowerment and Protection Architecture (DEPA) layer of India Stack, which is being vigorously pursued at iSPIRT.

iSPIRT organised a Policy Hacks session on these issues with Supratim Chakraborty (Data Privacy and Protection expert from Khaitan & Co.), Sanjay Khan Nagra (Core Volunteer at iSPIRT and M&A / corporate expert from Khaitan & Co) and Siddharth Shetty (Leading the DEPA initiative at iSPIRT).

Sanjay Khan interacted with both Siddharth and Supratim, posing questions on behalf of the industry.

A video of the discussion is posted below, along with the main text of the discussion. We recommend watching and listening to the video.

GDPR essentially is a regulation in EU law on data protection and privacy for all individuals within the European Union. It also addresses the export of personal data outside the EU.

Since it affects all companies with any business-to-consumer interface in the European Union, it is important to understand this legal framework, which sets guidelines for the collection and processing of personal information of individuals within the EU.

Supratim mentioned in the talk that GDPR is built on the following main principles:

  1. Harmonize law across the EU
  2. Keep pace with technological changes
  3. Allow free flow of information across EU territory
  4. Give individuals back control over their personal data

Siddharth explained iSPIRT's DEPA initiative. He mentioned that data protection is as important as data empowerment: the individual should have the ability to share personal data, by their own choice, to gain access to services such as financial services and healthcare. DEPA deals with the consent layer of India Stack.

This will help service providers, such as account aggregators, build a digital economy with sufficient control over privacy concerns. DEPA is essentially about building systems so that an individual is able to share data in a protected manner with a service provider, for a specified use, for a specified time, and so on. In a sense, it addresses the concern of privacy through a technology architecture.

DEPA is being pursued in India and has nothing to do with the EU or other countries at present.

For more details on DEPA, please use this link: http://indiastack.org/depa/

Sanjay Khan posed a relevant question: is GDPR applicable merely on account of having a website that is accessible or usable from the EU?

Supratim explained that GDPR applies if the personal data of data subjects in the EU is involved. Primarily, GDPR is triggered in three cases:

  1. You have an entity in the EU;
  2. You are providing goods and services to EU data subjects, whether paid for or not; and
  3. You are tracking EU data subjects.

Many companies fall into the third category, which will especially apply to websites where it can be shown that the EU is a target territory, e.g. websites in one of the European languages, or with payment gateway integration to enable payments in an EU currency.

What should one do?

Supratim further explained that the most important and toughest task is data management with respect to personal data: How did it come in? Where is it all lying? Where is it going? Who can access it? Once you understand this map, it is easier to handle. For example, a mailing list may be built from business cards collected at business conferences, but no one keeps track of these sources of collection. By not being able to segregate data, one misses the opportunity of sending even legitimate mailers.

If a data subject receives, and gets annoyed by, an unsolicited email on a subject that has nothing to do with them, the sender of that email may land in real trouble.

Siddharth mentioned that some companies providing products and services in the EU through a local entity are shutting shop.

Supratim mentioned that taking proper explicit and informed consent for email, as set out in GDPR, is a much better way to handle this. On a question by Sanjay Khan, he re-emphasised the earlier point about data mapping: based on the data map, one has to define GDPR-compliant policies.

EU data subjects have several rights: to edit data, port data, erase data, restrict processing, etc. GDPR has to be practised by actually enabling these rights and rolling out policies and processes around them. There is no single template for GDPR-compliant policies.

Data governance will become extremely important in the GDPR context, added Siddharth. Supratim added that a Data Protection Officer or an EU representative may be required in future, depending on the complexity of the data and business needs.

Can GDPR be enforced on companies sitting in India? In the absence of treaties, it may not be directly enforceable on Indian companies. However, for companies with EU linkages, there may be a top-down effect if the company's controller sits there.

Sanjay asked about companies with a US presence that do business in the EU. Supratim's answer was that, yes, these are the companies sitting on the fence.

How about B2B interactions – will official emails also be treated as personal data? Supratim answered that they may be. Again, this has to be backed by a record of the avenues where the data was collected and its legitimate use. Supratim further mentioned that several aspects of the law are still evolving and the idea at present is to take a conservative view.

Right now it is important to start the journey of complying with GDPR: follow the points raised earlier, map your data, start defining policies and processes, and evolve. In due course there will be more clarity. And if you start the journey of complying with GDPR, you will also be better prepared to comply with Indian privacy law and other global legal frameworks.

“There is no denying the fact that one should start working on GDPR”, said Sanjay. “Sooner the better”, added Supratim.

We will be covering more issues on data protection and privacy law in the near future.

Author note and disclaimer: PolicyHacks, and publications thereunder, are intended to provide a very basic understanding of legal/policy issues that impact the software product industry and the startups in the ecosystem. PolicyHacks therefore does not necessarily set out the views of subject-matter experts, and should under no circumstances be substituted for legal advice, which, of course, requires a detailed analysis of the relevant fact situation and applicable laws by experts in the subject matter on a case-by-case basis.

Understanding iSPIRT’s Entrepreneur Connect

There is confusion about how iSPIRT engages with entrepreneurs. This post explains our engagement model so that expectations are clear. iSPIRT's mission is to make India into a Product Nation. iSPIRT believes that startups are a critical catalyst in this mission. In line with the mission, we help entrepreneurs navigate market and mindset shifts so that some of them can become trailblazers and category leaders.

Market Shifts

Some years back, global mid-market business applications delivered as SaaS had to deal with the ubiquity of mobile. This shift upended the SaaS industry. Now another such market shift is underway in global SaaS, with AI/ML being one factor in this evolution.

Similar shifts are happening in the India market too. UPI is shaking up the old payments market. JIO’s cheap bandwidth is shifting the digital entertainment landscape. And, India Stack is opening up Bharat (India-2) to digital financial products.

At iSPIRT, we try to help market players navigate these shifts through Bootcamps, Teardowns, Roundtables, and Cohorts (BTRC).

We know that reading market shifts isn't easy. Like stock market bubbles, market shifts are fully clear only in hindsight. In the middle, it is an open question whether a given shift is valid (similar to whether the stock market is in a bubble or not). There are strong opinions on both sides until the singularity moment happens. The singularity moment is usually someone going bust by failing to see the shift (e.g. Chillr going bust due to UPI) or becoming a trailblazer by leveraging it (e.g. PhonePe's meteoric rise).

Startups are made or unmade on their bets on market shifts. Bill Gates' epiphany that the browser was a big market shift saved Microsoft. Netflix is what it is today on account of its proactive shift from ground to cloud. Closer home, Zoho has constantly reinvented itself.

Founders have a responsibility to catch the shifts. At iSPIRT, we have a strong opinion on some market shifts and work with the founders who embrace these shifts.

Creating Trailblazers through Winning Implementations

We are now tying our BTRC work to specific market-shifts and mindset-shifts. We will only work with those startups that have conviction about these market/mindset-shifts (i.e., they are not on the fence), are hungry (and willing to exploit the shift to get ahead) and can apply what they have learned from iSPIRT Mavens to make better products.

Another change is that we will work with young or old, big or small startups. In the past, we worked with only startups in the “happy-confused” stage.

We are making these changes to improve outcomes. Over the last four years, our BTRC engagements have generated very high NPS (Net Promoter Scores), but many of our startups continue to struggle with their growth ceilings, be it an ARR threshold of $1M, $5M, $10M… or a scalable yet repeatable product-market fit.

What hasn’t changed is our bias for working with a few startups instead of many. Right from the beginning, iSPIRT’s Playbooks Pillar has been about making a deep impact on a few startups rather than a shallow impact on many. For instance, our first PNGrowth had 186 startups. They had been selected from 600+ that applied. In the end, we concluded that we needed even better curation. So, our PNGrowth#2 had only 50 startups.

The other thing that hasn’t changed is we remain blind to whether the startup is VC funded or bootstrapped. All we are looking for are startups that have the conviction about the market/mindset-shift, the hunger to make a difference and the inner capacity to apply what you learn. We want them to be trailblazers in the ecosystem.

Supported Market/Mindset Shifts

Presently we support 10 market/mindset-shifts. These are:

  1. AI/ML Shift in SaaS – Adapt AI into your SaaS products and business models to create meaningful differentiation and compete on a global level playing field.

  2. Shift to Platform Products – Develop and leverage internal platforms to power a product bouquet. Building enterprise-grade products on a common base at fractional cost allows for a defensible strategy against market shifts or expanding market segments.

  3. Engaging Potential Strategic Partners (PSP) – PSPs are critical for scale and pitching to them is very different from pitching to customers and investors. Additionally, PSPs also offer an opportunity to co-create a growth path to future products & investments.

  4. Flow-based lending – Going after the untapped “largest lending opportunity in the world”.

  5. Bill payments – What credit and corporate cards were to the West, bill payments will be to India, thanks to the Bharat Bill Pay System (BBPS).

  6. UPI 2.0 – Mass-market payments and new-age collections.

  7. Mutual Fund democratization – Build products and platforms that bring informal savings into the formal sector.

  8. From License Raj to Permissions Artefact for Drones – Platform approach to provisioning airspace from the government.

  9. Microinsurance for Bharat – Build products and platforms that reimagine Agri insurance on the back of India Stack and upcoming Digital Sky drone policy.

  10. Data Empowerment and Protection Architecture (DEPA) – with usage in financial, healthcare and telecom sectors.

This is a fluid list. There will be additions and deletions over time.

Keep in mind that we are trying to replicate for all these market/mindset-shifts what we managed to do for Desk Marketing and Selling (DMS). We focussed on DMS in early 2014 thanks to Mavens like Suresh Sambandam (KissFlow), Girish Mathrubootham (Freshworks), and Krish Subramaniam (Chargebee). Now DMS has gone mainstream and many sources of help are available to the founders.

Seeking Wave#2 Partners

The DMS success has been important for iSPIRT. It has given us the confidence that our BTRC work can meaningfully help startups navigate market/mindset-shifts. We have also learned that a market/mindset-shift happens in two waves. Wave#1 touches a few early adopters. If one or more of them create winning implementations and become trailblazers, the rest of the ecosystem jumps in. This is Wave#2. The majority of our startups embrace the market shift in Wave#2.

iSPIRT’s model is geared to help only Wave#1 players. We falter when it comes to supporting Wave#2 folks. Our volunteer model works best with cutting-edge stuff and small cohorts.

Accelerators and commercial players are better positioned to serve the hundreds of startups embracing the market/mindset-shift in Wave#2. Together, Wave#1 and Wave#2, can produce great outcomes like the thriving AI ecosystem in Toronto.

To ensure that Wave#2 goes well, we have decided to include potential Wave#2 helpers (e.g., accelerators, VCs, boutique advisory firms and other ecosystem builders) in our Wave#1 work (needless to say, on a free basis). Some of these BTRC Scale Partners have been identified. If you see yourself as a Wave#2 helper who would like to get involved in our Wave#1 work, please reach out to us.

Best Adopters

As many of you know, iSPIRT isn’t an accelerator (like TLabs), a community (like Headstart), a coworking space (like THub) or a trade body. We are a think-and-do-tank that builds playbooks, societal platforms, policies, and markets. Market players like startups use these public goods to offer best solutions to the market.

If we are missing out on helping you, please let us know by filling out this form. You can also reach out to one of our volunteers here:

Chintan Mehta: AI shift in SaaS, Shift to Platform Products, Engaging PSPs

Praveen Hari: Flow-based lending

Jaishankar AL: Bill payments

Tanuj Bhojwani: Permissions Artefact for Drones

Nikhil Kumar: UPI2.0, MF democratization, Microinsurance for Bharat

Siddharth Shetty: Data Empowerment and Protection Architecture (DEPA)

Meghana Reddyreddy: Wave#2 Partners

We are always looking for high-quality volunteers. In case you're interested in volunteering, please reach out to one of the existing volunteers or write to us at [email protected].