Thursday, December 31, 2009

The case for User-specified TOS and Privacy Policies

Every service provides its own Terms of Service and Privacy Policy, and users are expected to accept them before using the service.
But I can see a case for the reverse: a User-specified Terms of Service (UTOS) and a User-specified Privacy Policy (UPP).
Every user specifies, in some simple syntax, a UTOS and a UPP, and services need to conform to them in order to serve that user. I think it is time to take the privacy policy away from the lawyers and hand it to the computer scientists.

Here are some components of a UPP:

Infolet                      Actions [Predicates]
Biographical Information     Retain for 30 days
                             Share with <n'th level of Social Graph, Other Services/Apps>
Service Usage Information    Retain for 90 days
                             Share with <no one>
User-generated Information   Retain forever, unless explicitly deleted
                             Share with <Provide Settings to control>
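
As a rough sketch of what such a simple syntax could look like, the Python snippet below expresses the components above as plain data. The InfoletPolicy structure, the category names, and the share-with labels are hypothetical placeholders for illustration, not a proposed standard.

# Hypothetical sketch: a UPP expressed as data instead of legalese.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass(frozen=True)
class InfoletPolicy:
    infolet: str                  # category of user data
    retain_days: Optional[int]    # None = retain forever, unless explicitly deleted
    share_with: Tuple[str, ...]   # recipients the user is willing to share with

my_upp = (
    InfoletPolicy("biographical", retain_days=30,
                  share_with=("social_graph:level_1", "other_services_and_apps")),
    InfoletPolicy("service_usage", retain_days=90, share_with=()),
    InfoletPolicy("user_generated", retain_days=None,
                  share_with=("user_controlled_settings",)),
)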

If a service either violates the policy or does not support some of its components, the user could easily decide whether to grant an exception and sign up, or to quit. Some services provide very good privacy policies in the beginning and slowly start diluting them. It becomes impossible for humans to keep track of the legalese English to see if something is amiss, and this is especially hard for less popular sites that get no media attention. A policy specification like the one above, if standardized, could I think address user anxiety better.
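
With a specification like that, the conformance check becomes mechanical rather than a matter of re-reading legalese. Continuing the hypothetical InfoletPolicy sketch above, the check below flags a service that retains data longer or shares it more widely than the user allows, so a quiet dilution of the service's policy would surface automatically.

def conforms(service: InfoletPolicy, user: InfoletPolicy) -> bool:
    """True if the service retains no longer and shares no wider than the user allows."""
    if user.retain_days is not None:
        if service.retain_days is None or service.retain_days > user.retain_days:
            return False
    return set(service.share_with) <= set(user.share_with)

# Example: a service quietly extends retention of biographical data to 60 days.
diluted = InfoletPolicy("biographical", retain_days=60,
                        share_with=("social_graph:level_1",))
print(conforms(diluted, my_upp[0]))   # False -> grant an explicit exception, or quit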

Algorithmic Imperialism

Algorithms are cute answers to the "How?" questions in life.
(Replace "cute" with efficient, smart, advanced, etc., as your context demands.)


In short, algorithms are remarkably powerful and at the core of most useful things in life. If (absolute) power corrupts, one question is whether algorithmic power can corrupt too, and lead to problems like imperialism.
To make good algorithms, you often need 1) intelligent people, 2) access to existing algorithms, and 3) some technology. Depending on your area of interest, the quality and quantity of each will vary; making an auto-flushing toilet differs from making a small flying vehicle to take humans to Mars.
So, if an organization can hoard lots of intelligent people, has plenty of access to existing art (often exclusive access), and has enough money to procure the technology of its choice, it can generate more algorithms and, in the current patent regime, even hoard them. The effects of this positive feedback can be quite strong, until governments and regulation step in.

But the risk I want to bring up is one of perception. A company may provide one or two good algorithms that capture mass appeal, and then ride that wave with future products built on sub-optimal algorithms. For example, Cisco may not make the fastest or most power-efficient routers for a particular customer's needs, but a CIO will be hard-pressed to buy routers from a much smaller company, even if that company is very stable today. Likewise, even if a new search engine serves some customers' needs more efficiently, how likely is it that they will abandon their current favorite search engine?

There really is no algorithmic marketplace where the wisdom of crowds could be put to the test. Most people rely on the media to winnow out the best choices for them. But algorithmic brands could blind people and gently imperialize them.

I would like to see some data that shows why we should not worry about this.