Palestinians are being arrested by Israel for posting on Facebook

One of the more insidious aspects of Israel’s military dictatorship in the West Bank and East Jerusalem is its blanket monitoring of Palestinian social networks and other forms of communication via the internet. This often leads to arrests being made. A recent report by 7amleh, the Arab Centre for Social Media Advancement, names 21 Palestinians who have been imprisoned or detained by Israel for their posts on Facebook.

An ongoing narrative popular among Israeli propagandists in the past few years blames the nebulous concept of “incitement” for the phenomenon of Palestinians fighting back against Israel’s brutal occupation forces. A Mossad proxy organisation misleadingly known as the “Israel Law Centre” (aka Shurat HaDin) has even launched lawsuits against Facebook for supposedly facilitating terrorism. A US federal court threw the billion-dollar case out in May.

Last year, Israel’s anti-BDS (boycott, divestment and sanctions) minister Gilad Erdan claimed that Israeli blood was “on the hands of Facebook” and its CEO Mark Zuckerberg. Shurat HaDin even organised a campaign to raise money for a billboard that would have been erected outside Zuckerberg’s home.

Read more

Predictive policing: in Israel, a “wrong” Facebook post often leads to detention


Palestinian activists have documented around 800 cases in which young people in Israel were arrested over statements made on Facebook. At the Netzpolitik.org conference, calls were made for algorithms to be steered toward the public good.

On Friday, at the fourth Netzpolitik.org conference in Berlin, Marwa Fatafta of the Arab Center for Social Media Advancement 7amleh painted a bleak picture of “predictive policing” in Israel. Since October 2015, the Palestinian organisation has documented around 800 cases in which young people were arrested over Facebook posts, the activist explained. Those affected often simply disappear into prison for several months without ever receiving a proper trial.

Read more

Israel and Facebook team up to combat social media posts that incite violence

Israeli officials are drafting legislation to force social media networks to ‘rein in’ racially-charged content, raising legal and ethical issues

Israel and Facebook will begin working together to tackle posts on the social media platform that incite violence, a senior Israeli cabinet minister has said.

A spate of high-profile attacks on Israelis over the past 12 months has been incited by inflammatory posts on Facebook, the government argues, which is why legislation to compel the company to delete posts that encourage violent behaviour is in the works.

Representatives from Facebook met with government ministers last week, including interior minister Gilad Erdan and justice minister Ayelet Shaked, who have repeatedly called on the company to do more to monitor and control content.

Read more

If you use Facebook Messenger, here's how you're being recorded even when you're not using your phone

Improvements in technology bring both ups and downs. We have, undoubtedly, become enamoured with technology's ability to make our lives easier and to keep us informed in ways we never thought possible. Everything has been digitised, and with so many forms of communication available, it's no wonder the home telephone has collected dust.

People have come to see multi-tasking as a virtue, and so have pushed companies to make communication easier and faster. Facebook Messenger, for instance, has become a powerful tool for people to connect with each other. It was reported back in April that 900 million people use Messenger every month, and millions of them are chatting with strangers, friends and family, and getting in touch with businesses. Essentially, valuable and personal information is being shared.

Read more

Baring it all on Facebook

The warnings are everywhere: your social life can be mapped fairly easily as soon as you actively maintain a Facebook page. Buro J&J maps the result visually.

Metadata are the traces you leave behind while communicating via the internet and telephone. These are mostly individual traces: traces about yourself and the direct contacts you maintain with people. Of course, those traces can be used to sketch a picture of your social world, but that first requires storing someone's data for an extended period.

Individual data is hard data about where, at what time and with whom you spoke, on the basis of which investigative services can label you a suspect, a witness or an unknown. You were in the area, you called or WhatsApped someone, you did not respond to an SMS bombardment: all in the name of the investigation.

Metadata collection

For direct prosecution that data matters; for intelligence services, less so. They will no doubt collect vast amounts of data ('At a meeting with his British counterparts in 2008, Keith Alexander, then head of the National Security Agency, reportedly asked, "Why can't we collect all the signals, all the time?"', the Washington Post, 13 May 2014), but that lies in the nature of intelligence services. The data only acquires intelligence value once you follow someone's digital footsteps for a longer period. For preventing attacks, such data is usually useless: it may reveal patterns, but it has no predictive value for possible actions.

On 20 December 2013, NBC News opened with: 'NSA program stopped no terror attacks, says White House panel member'. The Guardian (14 January 2014) underlined this claim, reporting that according to the Senate Judiciary Committee the bulk collection of telephone data had played only a limited role in preventing terrorism.

The Guardian based its claim on research by the New America Foundation, which concluded that the NSA had not prevented a single attack. The attack on the Boston marathon of 15 April 2013 underscores that conclusion: several agencies (FBI, CIA and NSA) had the suspects under surveillance, yet they were still able to carry out their attack.

The Netherlands shows the same picture. The intelligence services failed to prevent the murders of Fortuyn and Van Gogh. It was already known that most members of the Hofstad group, to which Mohammed Bouyeri belonged, had been approached by the AIVD. It was also known that they met at Bouyeri's house and that the Amsterdam police had copied his address book for the intelligence service. The service has stated that it can hack web forums, and in all likelihood it also placed telephone and internet taps on Hofstad group members. Despite all that data collection, Theo van Gogh was murdered (see Onder Druk, Buro Jansen & Janssen).

An open book

For intelligence services, however, preventing attacks is not the main concern. They want to map the 'subversives', the 'counterculture'. Specific information can then be important for zooming in on a group, but that data (metadata) is only partly interesting.

You could record, for example, that Margriet and Barbara (the two women in this story) communicate with each other frequently, say every day at 4 p.m. for two minutes. Margriet is located east of Amsterdam's city centre, Barbara to the west. A location is available for both women. Laurence (the man in this story) rarely communicates with either woman, with Barbara only at certain moments. The three never hold group conversations.
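The kind of pattern described here can be sketched in a few lines of code. This is a minimal illustration, not any agency's actual tooling; the call records, names and thresholds below are invented for the example.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical call-detail records: (caller, callee, timestamp, seconds).
records = [
    ("Margriet", "Barbara", "2014-06-02 16:00", 120),
    ("Margriet", "Barbara", "2014-06-03 16:01", 118),
    ("Margriet", "Barbara", "2014-06-04 15:59", 121),
    ("Laurence", "Barbara", "2014-06-03 21:30", 45),
]

def recurring_pairs(records, min_days=3):
    """Group calls per (unordered) pair and per hour of day; a pair that
    talks at roughly the same hour on min_days different days is flagged
    as a recurring-contact pattern."""
    buckets = defaultdict(set)
    for caller, callee, ts, _ in records:
        t = datetime.strptime(ts, "%Y-%m-%d %H:%M")
        pair = tuple(sorted((caller, callee)))
        # round to the nearest hour so 15:59 and 16:01 fall together
        hour = (t.hour + (1 if t.minute >= 30 else 0)) % 24
        buckets[(pair, hour)].add(t.date())
    return {k: len(days) for k, days in buckets.items() if len(days) >= min_days}

print(recurring_pairs(records))
# the daily 4 p.m. Margriet-Barbara contact surfaces; Laurence's single call does not
```

The point of the sketch is that no content is needed: timestamps and endpoints alone expose the relationship.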

Actually, at this point you have already lost sight of the bigger picture, because you are zooming straight in on the individual. You lose the helicopter view; you are no longer above the person but, in effect, standing right next to him. For investigative services this matters for tracing criminal suspects: you record someone's life in retrospect. For intelligence services this data is essentially useless; you are always too late, as the murder of Van Gogh shows.

Since Edward Snowden's revelations, the fact that everyone nowadays leaves a trail of data has faded into the background. The government's collecting mania is cast as the 'culprit' in the debate. This has been known for many years, yet many people seemed, and still seem, unbothered by it. The discussion about the mass of data that people themselves pour onto the internet through their own communication has slipped out of view.

Communication about my presence in the Albert Heijn happens both covertly (metadata) and openly (Twitter, Instagram). It seems to come with the territory that during an action, say toppling shelving units in the AH to protest the power of the supermarkets, you not only drag your metadata along (giving away your location even without making a call), but also tweet (hashtag #valAHaan) and take photos with that same phone.

That the government collects this data has actually become a side issue. Many people share their entire lives every second of the day, not only through metadata but also through 'real' data. This dumping of personal data into the street is also visible on Facebook. We asked three people (Margriet, Barbara and Laurence) who are politically active outside parliament whether we could analyse their Facebook data. We then turned that analysis into a graphic image.

Who is who

Suppose you are vaguely friends with Margriet. You have never met her friends and are invited to her thirtieth birthday. Margriet throws a big party, and all her family, friends and acquaintances come by. Even casual friends from nightlife venues are present.

If you were to photograph the large hall where the party takes place, you would get a rough sketch of Margriet's Facebook page. Her family seeks each other out, people from the NGO she is active in do the same, squatters exchange the latest news, the members of her dance groups greet one another, and others huddle in small groups. A few loners drift through the room, not really knowing anyone.

As the evening progresses, the dance floor becomes the centre of the party, and there too groups form. That first moment of the evening, when acquaintances seek each other out, is what you see in the image of Margriet's Facebook page. Only now not just on her thirtieth birthday, but every day.

In general, it is striking that most people present themselves on Facebook under their full name (first and last). NGOs, action groups, bands, squats and alternative venues likewise post complete profiles on social media. As a result, it quickly becomes clear whom someone sympathises with, where he or she goes out, which squats the person knows and which actions someone supports.

Strikingly, all those actions and NGOs also belong together. There are no odd items among them, such as companies like Coca-Cola, Monsanto or Shell. The three activists clearly display their political views through their Facebook pages. Of course Margriet, Barbara and Laurence are known to Jansen & Janssen, but even for unknown activists it would be easy to chart political colour, friends, family, work and social network.

A squatting symbol on Facebook, for example, is something entirely different from a squatting symbol on a T-shirt you happen to wear today. You probably wear that T-shirt anonymously, without a name tag or a citizen service number on your back. Through your Facebook page, that T-shirt is tied to your name, your social network, your world. That is not anonymous; it goes even beyond personal identification.

Not perfect

Of course, the picture is not perfect and all sorts of inconsistencies must be taken into account, but the graphic representation of the three people does sketch a picture of their lives. On seeing the image, Margriet noticed that her 'social network is very clearly visible'. She also noticed the imperfections: 'Funny that there is also a band and a person in there who are hardly active any more.' Because the picture is not perfect, a few caveats about its interpretation follow here.

Communicating a lot, 'liking' a lot or receiving many 'likes' on Facebook does not tell the whole story. In the images, the various people and groups have different colours. Dark red signals high activity, but this analysis does not go into that more deeply; with a superficial analysis, high activity is hard to define precisely. The image does make clear, however, that more refined methods could map both the degree and the kind of activity in more detail. Conclusions about leadership and hierarchy drawn from these images should therefore be treated with caution.

Communication on Facebook does not tell the whole story either. People who are very active on the internet may be very shy in real life. People who talk big and act tough online do not necessarily play an important role within a group or social network. Nor is everyone present on Facebook: there are still people without a Facebook page, and they are invisible in the network. Even with these caveats, Laurence has to admit that his life is mapped fairly accurately.

Facebook image of Barbara
Arrow 1: old circle of friends and contacts via third parties, no strong ties
Arrow 2: colleagues and NGOs related to Barbara's work
Arrow 3: strong tie to Barbara, stands out in the network; this is her boyfriend Evert
Arrow 4: colleagues and nightlife venues related to Evert, Barbara's boyfriend
Arrow 5: old circle of friends and contacts via the family swarm, no strong ties
Between arrows 1 and 5: family swarm
Between arrows 2 and 4: individuals, squats and alternative venues

Family swarm

Margriet's Facebook image consists of one large swarm (arrow 1 of her image), a small cloud below it (arrow 3), two clouds close together (arrow 4) and a few loose individuals at the top. Barbara's image shows a small cloud on the left with a tuft below it (arrows 1 and 5) and, on the right, a large swarm arcing from top to bottom (arrows 2, 3 and 4). Laurence's image shows one large cloud in the middle (arrows 2 and 4) and two tufts, one above (arrow 1) and one below (arrow 3).

In all three images the family tufts stand out. Margriet has a small family cloud (arrow 3) hanging far below her 'own' cloud. Barbara has a large cloud to the left of the central swarm: her family and probably old friends in her country of origin. Laurence has a small cloud below the main cloud, where his family and a few old friends have gathered.

All three activists keep a 'distance' from their family. For Barbara this reflects physical distance; for Laurence and Margriet it may say something about degrees of attachment. Also striking in Barbara's case is that, besides the 'family' swarm (between arrows 1 and 5), two more small clouds (arrows 1 and 5) sit even further from the central cloud. The explanation: a group of friends or acquaintances in Barbara's country of origin. Arrow 1 shows a small network connected to a few clouds near the central swarm. It is not directly connected to the central person (arrow 3) in the large cloud. This is probably a group from the past with which, for various reasons, fewer relationships have been maintained.

How can those specific family swarms be told apart? Quite simply, in fact. The activists appear on Facebook under their surnames, and in the 'family' clouds that name occurs frequently. It is an assumption, of course, but when asked, Margriet confirms that 'the family is at the bottom right'. Laurence likewise says that arrow 3 is 'a strikingly loose little network. This is a network of old friends and family. I have little contact with them, and that even seems readable from the Facebook image.' In any case, in none of the three activists' images is the family swarm absorbed into the central cloud.

Facebook image of Laurence
Arrow 1: small cloud of friends or colleagues involved in an NGO
Arrow 2: activist swarm around a squat in Amsterdam (groups, individuals, squats, etc.)
Arrow 3: family swarm and some old friends
Arrow 4: activist swarm around a group in The Hague (groups, individuals, squats, etc.)

Activist swarm

For all three, the activist swarm consists of a collection of groups, squats, alternative venues and individuals. For Margriet, that cloud (arrow 1) has several parts. At the top left sits a group of people, bands and groups around a few venues and squats, such as the OCCii on the Amstelveenseweg in Amsterdam West and the squat De Valreep in Amsterdam Oost.

At the bottom left of the central cloud, the word anarchism recurs regularly, as in Anarchistische Groep Friesland and Anarchistisch Kollektief Utrecht. These groups and people 'hang around' Doorbraak, a left-wing grassroots organisation. At the bottom right of the swarm are individuals active in various NGOs. Given Margriet's work, it is logical that she is connected to this group of people. Only a few groups appear here; it is mostly individuals.

Finally, at the top right, a group of people and a few bands are clustered between the alternative venues and squat (top left) and the two clouds to the right of the activist swarm. The top right forms a kind of bridge to two dance groups that sit to the right of the central cloud.

Laurence's activist swarm (arrows 2 and 4) can be divided into distinct parts just as sharply. At the bottom right, a specific group of squatters/activists who share the same hobbies. At the top right, a group of people and organisations gathered around the Autonoom Centrum Den Haag (arrow 4). At the bottom left, the squat De Valreep in Amsterdam Oost appears again, just as with Margriet. It is striking how De Valreep (arrow 2) acts like a spider in Laurence's image, spinning many connections with people and groups.

Around this hang various activists and squatters from Amsterdam. The central axis in Laurence's activist swarm runs between De Valreep in Amsterdam and the Autonoom Centrum in The Hague. At the top left are a few people who form the link between the Amsterdam squatting scene and the small cloud floating above the left of Laurence's activist swarm. This small cloud, detached from the central one, consists of people involved in an NGO.

The 'De Valreep – Autonoom Centrum' axis in Laurence's activist swarm is not directly visible in Margriet's image. There, a few individuals play the central role in the 'large cloud' instead. Margriet: 'In the large cloud (arrow 1), a number of people stand out who are apparently very active on Facebook, such as Albert and Astrid.' These two were active in the past (the 1990s and early 2000s) but no longer play an important role in the activist scene. So not only groups but also individuals can act as connectors, as the swarms around Albert and Astrid show. They hold Margriet's cloud together.

Something similar is going on with Barbara. The bottom of the large cloud (arrows 2, 3 and 4) is held together by one person (arrow 3): someone with whom Barbara has many connections, her boyfriend Evert. Even without knowing about the relationship between Barbara and Evert, his very active position within her network stands out. It is clear that Evert is very close to Barbara. This can also be concluded from the fact that Evert has many contacts with the family swarm (between arrows 1 and 5).

To the right of Evert, the lowest part of the cloud (arrow 4) is occupied by colleagues and groups around the venue where he works. In the middle, the alternative venues and squats are the main presence (around arrow 3). Halfway up, a slight break is visible in the cloud, making the top seem somewhat detached from the part below it (arrow 2).

That top is occupied by colleagues from Barbara's workplace, an NGO that maintains some contacts with the activist scene. From that top, a few individuals, divided into two small tufts hanging against the large cloud, also maintain contact with the 'other side', the swarm of family and old friends. These are people with connections to Barbara's country of origin, something that can be inferred from their names.

Facebook image of Margriet
Arrow 1: activist swarm with groups, individuals, squats and alternative venues
Arrow 2: loose individuals, mostly old classmates from secondary school
Arrow 3: family swarm
Arrow 4: leisure cloud: a large dance group (where the arrow points) and a small one (to the left of the large cloud)

Leisure cloud

What is striking about Margriet is that she has two separate clouds to the right of the activist swarm (arrow 4 and just beside it). At the centre of those two clouds are the names of a dance group. The individuals around them are in all likelihood members or sympathisers, as in the activist swarm. Barbara and Laurence do not reveal their hobbies so clearly, but that may also mean they pursue activities with people who do not use Facebook, or that are not visible as a distinct club.

In principle there are of course several possible explanations for whether or not groups show up. Yet that cannot be the whole story, because the clusters around work, around an NGO, around a squat or a venue are very clear indeed. A more logical explanation is that the activity is embedded in the existing structure, so that no separate group appears: this happens when an activity does not take place in an official club or group setting, unlike Margriet's two dance groups. For Margriet this seems to hold, also with regard to her old classmates. 'The little group at the very top are old classmates from my secondary school, with a few isolated contacts,' is Margriet's analysis of her own Facebook image.

Barbara actually shows two clouds: a family-and-friends swarm plus two separate clouds of old friends or acquaintances with whom she now has less contact. This cloud is detached from her life in the Netherlands, but comparable to Margriet's dance clouds: separate worlds with a few connections. In the cloud with Barbara's work at the top and Evert's work at the bottom, the centre is occupied by nightlife, personal contacts and some activism and squats. Barbara has no clearly separate sports, culture or nature group within her network. Barbara and Margriet are, however, linked to each other in that centre, and Margriet and Barbara both reappear in Laurence's network.

Recording a social network

After seeing his Facebook image, Laurence wonders how much he will change about his behaviour on Facebook. 'It gives a clear insight into my social life,' he observes. He notes that an analysis of the Facebook data makes clear where the weak links in his life are. 'That is quite scary, strange to discover it this way,' he adds.

Margriet too is surprised at how sharply Facebook maps her social life. 'If you know which themes those swarms or clouds are interested in, you have a wealth of information. Combining individuals and groups makes that clear quickly. And you not only know which themes are central; you can also see how particular information could be spread within those specific networks, because you know who is active, who plays a central role, who has many connections, and so on. And that need not be limited to commercial messages,' she says.

Facebook records someone's social life, but it does not do so alone: people supply the internet company with their personal data. That information may look innocent element by element, but whoever puts the data in context can take a snapshot of the guests at Margriet's thirtieth birthday. That snapshot reflects her social life, and not only on that day. It places people in groups, colours in political ideas and gives insight into relationships.

Whoever turns all the groups and individuals on Margriet's Facebook page (the same goes for Barbara and Laurence) into a graphic image can chart an even more specific social history of her. Childhood, school, university, employment history, leisure, activism and political preferences can be partly filled in. Perhaps not perfectly, but Facebook holds so much data that this precision will in all likelihood only increase over the years.

The question, of course, is whether you want to lay your entire social life out in the street, or whether you want to gain and/or keep control over it yourself. That is a personal choice that has nothing to do with privacy debates. 'What do you share with a company, and thereby indirectly with the world?' is the primary question you should ask yourself.

Maikel van Leeuwen

Facebook images, images of your Facebook page, or a social graph

The accompanying images are a visualisation of the Facebook pages of Margriet, Barbara and Laurence. The connections in the network, the lines between the circles, are known as 'edges'. The word itself already hints at the caution required in interpretation: edges are rims and ledges, not deep bonds. A connection can be a 'like' or a comment.

A node, a circle in the images, is a person and/or organisation within the network. Together, nodes and edges form a social graph, a Facebook image. The size of a node indicates its 'popularity' in the network. Various calculations can determine how popular a node is: popularity is determined by the number of likes, but also by the degree of activity in sending and receiving messages.
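The popularity measure described above can be sketched as a weighted degree count over the graph's edges. This is a simplified illustration of the idea, not the tool Buro J&J actually used; the interaction list and weights below are invented (a 'like' counting 1, a message 2).

```python
from collections import defaultdict

# Hypothetical interactions: (source, target, weight).
interactions = [
    ("Margriet", "Albert", 2), ("Astrid", "Albert", 1),
    ("Barbara", "Evert", 2), ("Evert", "Barbara", 2),
    ("Margriet", "Barbara", 1),
]

def popularity(interactions):
    """Weighted degree per node: every edge contributes its weight to
    both endpoints, mirroring how node size scales in the images."""
    score = defaultdict(int)
    for src, dst, weight in interactions:
        score[src] += weight
        score[dst] += weight
    return dict(score)

ranked = sorted(popularity(interactions).items(), key=lambda kv: -kv[1])
print(ranked)  # most 'popular' node first
```

Even this crude count already surfaces the connector roles the article describes: the node with the most weighted interactions ends up drawn largest.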

Find this story at 30 June 2014

Facebook, the ultimate intelligence company

You could compare Facebook to a people's secret service. Because however you look at it, in the end it is the internet users who voluntarily surrender their personal lives to commercial business and to the government.

The preceding article on Facebook profiling, 'Met de billen bloot op Facebook', is mainly about the networks people are embedded in. Profiling, however, consists not only of network analysis but also of identification analysis. Whoever examines the profiles of Margriet, Barbara and Laurence obtains not only a great deal of information about family, friends, acquaintances, present and past, but also about preferences.

Facebook functions as a kind of intelligence service. It collects information that people supply themselves. That seems different from an intelligence service's personal dossier, which consists of newspaper articles, media appearances, tweets, Facebook posts, meeting minutes, surveillance reports and telephone/internet taps.

Collecting mania

Whoever zooms in on the Facebook data, however, finds that the commercial internet company manages the very data that intelligence services try to collect: information about personal preferences, relationships, contacts and networks. The difference is that Facebook has to do nothing for that data collection. It simply offers a free service. Free, in this case, naturally does not mean without cost. And for Facebook it has long since stopped being just about the advertisements you have to tolerate.

The Facebook images of the three activists taking part in our visualisation and analysis project show a number of things that intelligence services are also constantly looking for: who your friends are (on Facebook), which action groups and political parties you belong to, and which demonstrations and actions you sympathise with. Likes naturally play a role, but so do the interactions and the messages linked to those political preferences.

With the subscribe button, Facebook can attach value to your various personal relationships. Those relationships and preferences lay your social life bare: what music you like to listen to, which books you read and how (e-book or paper), your favourite TV series, the sport you practise, the games you play when you are briefly bored.

This is not personal information that Facebook wheedles out of people. It is not an intelligence service requesting your library records to find out which book you borrowed. People hand Facebook all their data voluntarily, without coercion. At most you could speak of peer pressure, the pressure of the argument that 'everyone uses it'.

The company's knowledge of your social life extends from your first name, surname, date of birth and age to your background and your face (scan): in effect, your GBA (municipal personal records database) data. But Facebook also holds data on your primary school, secondary school and higher education (DUO, the education executive agency, data), your sexual orientation (GGD data related to vaccinations and tests for HIV and other sexually transmitted diseases), your current and former employers (UWV and DWI data), partner and ex-partners (GBA data), credit card details (banks and credit card companies) and physical location (telephone companies and internet providers). Data that government services must request from all sorts of separate institutions.

A dark back end

People share a great deal, and Facebook regularly asks you to complete your profile. Intelligence services have to knock on the doors of all those separate institutions, or try to gain direct and constant access to this data. At the back end, you do not really know what Facebook does with that data. Linking advertisements to your profile is the most visible use, but the back end remains dark.

With Edward Snowden's revelations about the data vacuum cleaner that is the National Security Agency (NSA), it has become clear that multinationals such as Facebook, but also Google, Microsoft and Apple, leave their back doors open to the intelligence services. The American magazine Mother Jones ran the headline in December 2013: 'Where Does Facebook Stop and the NSA Begin?'

Yet that framing is not entirely correct; it is more like the world turned upside down: 'Where do the intelligence services stop and where does the company Facebook take over?' This is not just about the interactions with your friends or with groups on Facebook. Facebook wants to know much more about you, and whether you are logged in or not, the company makes sure it stays informed about what happens in your life.

At the end of October 2013, Facebook analyst Ken Rudin said the company was running tests to record and analyse the cursor movements of individual users on Facebook. This involves logging mouse movements to locations on the screen in order to deduce the interests of individual users.

The analyst claimed this was about ad placement, but ultimately it is about profiling users. It lets Facebook see, for instance, whether a person is looking at their phone or computer screen while using the social medium, for how long, when and at what. All of this still relates to the company's own website, but with additional functions such as a pedometer sports application, Facebook can record how long, where and when you went running or walking.

It will not be long before the company has mapped your overall physical condition, once additional applications for blood pressure and heart rate become available. Google, for instance, is developing a smart contact lens for diabetes patients and is investing in companies working on diabetes. All free of charge, of course. Or is it not free?

A wealth of information

Facebook wants access to the microphone of your computer, headset or phone, and thereby reaches into your living room, where you may be listening to music entirely apart from Facebook, music this private intelligence agency can then hear and add to your profile. How someone feels thus comes within the company's reach. Even if you supply a false name, address, date of birth and other details, the company still follows you. It records the device you use to log in to Facebook, with the corresponding IP address, but also your location, time zone, date and time.

Even if you avoid liking anything, the private company follows your steps through 'events' and 'check-ins', and the websites you visit while under Facebook's observation. This way the company learns your favourite cafés, restaurants, dishes and recipes. The intelligence company records which people you visit regularly, at which locations you take photos and videos, who you chat with and which groups you keep an eye on.

Photos and videos added to Facebook contain a rich collection of additional data, such as which device they were made with, shutter speed and aperture, location, author, time and so on. And from your chats and messages, Facebook also keeps the pieces of text you deleted. You want to write to a friend that he is a 'jerk', but then delete it because you do not want to put the friendship at risk; instead you send him a message saying you have no time this weekend. Facebook now knows more about your relationship with that friend than the friend himself.

It should be noted that for all of this you have to be logged in to 'your' Facebook page and have certain applications installed and switched on, such as GPS for your location and the pedometer for tracking your runs. With facial recognition, people who want to remain invisible can be dragged out of the darkness by Facebook: not by the company itself, but by its workforce, the users of Facebook. Yet who handles this consciously? And above all: who stops to think that your identity is always linked to that of others?

In that sense, Facebook is the ultimate intelligence agency. The company is the people's secret service, meaning that the individual users do the work of secret service agents, on themselves and on others. The Stasi would have licked its fingers at the prospect, which is why intelligence agencies such as the NSA are so keen on access to Facebook's back end; the company will hand over the gathered information without batting an eye.

Government support matters

Handing over collected personal information to intelligence agencies has several aspects. First, large internationally operating companies are more dependent on their 'national government' than they will admit. Shell, for instance, maintains close relations with both the Dutch and the British governments to safeguard its operations abroad. When something goes wrong somewhere, both the diplomatic and the military apparatus of the 'home country' are mobilised.

American companies such as Facebook and Google will do the same. If Gmail (Google) is hacked, the company will let the American government look over its shoulder, but that same government will in turn support the company through diplomatic and, if necessary, military (in this case probably cyber-military) channels. That is hardly surprising: Google is a large company and important for the American economy and for American hegemony.

The same goes for Facebook, various accounts of which were hacked in early 2013. The American government will often be asked to respond 'officially', in many cases to rebuke the Chinese. That support from the American government is not free either. It will put pressure on the companies to keep the back door open for intelligence agencies such as the NSA, but also the FBI. You could almost speak of a public-private partnership, one in which the multinationals often do not have to pay for the government's services.

In addition, governments try to build as fine-meshed a network as possible of diplomatic posts and embassies to promote their country abroad. Those posts also have another function. Snowden's revelations have underlined that the Americans and friendly nations such as Canada, Australia, Great Britain and New Zealand use those diplomatic posts to be able to eavesdrop anywhere in the world.

Employees of international companies operating worldwide will regularly be asked to contribute to their government's intelligence operations. They are often in different places than the diplomats, who tend to operate abroad under a magnifying glass. Countries thus try to get the sharpest possible view of foreign countries, competitors and enemies. The more diplomatic posts and cooperating companies, the sharper the picture. Facebook essentially tries to do the same with the data it collects and analyses from its users. The more data, the sharper the analyses; the more insight into connections and interactions, the clearer the picture.

Over the years, Facebook has increased the number of pixels (dots) in its picture of each profile. Ultimately it is working towards a razor-sharp image of every user. Facebook realises this itself. On 12 November 2013 the company filed a patent application, its '178th patent application for a consumer profiling technique the company calls inferring household income for users of a social networking system' (Facebook's Future Plans for Data Collection Beyond All Imagination, 4 December 2013).

This application describes the quantity and diversity of the data. As of October 2013, Facebook had 1,189,000,000 active users worldwide. The company already uses this data for social science research by its own Data Science Team. This team does not only study the data; it also uses the users themselves as research material, as in the study by Eytan Bakshy, who manipulated the access to content of 250 million users. The users as test subjects in Facebook's ideal world.

Just a company

In the end, Facebook is of course not a free service. Initially it will mainly try to make money by selling user profiles for targeted advertising. The company has already taken the next step through the terms it sets for creating a Facebook page. Photos and videos are in principle no longer your property: the company can use them royalty-free. This also applies to Instagram, of course. Other companies, such as Twitter (Twitpic), do the same.

What applies to photos and videos naturally also applies to texts and other content you add to your page. For these highly favourable terms, the company in turn depends on the American government. Should legislation suddenly change and users demand money from Facebook, it would be a disaster for the company.

Given the total number of users, almost as many as China has inhabitants, and the data the company has collected and manages, you would almost forget that it is simply a company with 6,818 employees, a board of directors and shareholders who ultimately want to see their investment returned with a profit.

The board of directors shows where Facebook stands in the market. Its members have been recruited from all quarters: Google Inc., Microsoft Corp. and its subsidiary Skype, eBay Inc. and other technology companies, of course, but also entertainment companies such as Netflix and the Walt Disney Co., as well as several banks, such as Morgan Stanley & Co. LLC and Credit Suisse, and The World Bank Group.

Contacts with the political world also matter; these run through former members of the White House Chief of Staff's office and the US Department of the Treasury. And finally, of course, the old-fashioned consumer market, with General Motors Corp. and Starbucks. With a small number of board members, Facebook covers its contacts past and present. Ultimately it is and remains just a company, perhaps a private intelligence agency, but without profit the shareholders will eventually walk away and it will collapse.

Alternatives?

What happens to the collected personal data once the company goes bankrupt is unclear. Facebook's takeover of WhatsApp showed that users suddenly became wary about their personal information and went looking for 'alternatives'.

The existing alternatives suffer from essentially the same objections as Facebook. Google Hangouts is part of another multinational, as is iMessage. Telegram was decried as a Russian app, but why that would be worse than an American WhatsApp, technical shortcomings aside, remains unclear.

Ultimately this is no longer a discussion about privacy, civil rights and/or big data. Facebook does not collect the data itself; it obtains it from its users/agents, and they hand over their personal information entirely voluntarily. Peer pressure aside, there is no coercion or compulsion involved.

Indirectly, the Facebook intelligence agency puts solidarity and the minority at the centre. How much solidarity do we show towards people who may want to be in a photo but do not want to be tagged? Who write something but do not want it liked? Who want to be in a photo but do not want to see it posted on Facebook? The debate about Facebook is also about plurality versus uniformity: everyone the same, on Facebook, or everyone something different.

To a greater or lesser extent, Facebook records the following about you:

Your cursor movements, geolocation, metadata of photos and videos (device type, shutter speed, geolocation if GPS is switched on, author), biometric data (facial scans, which match very well), chat data and text you never even sent, likes, friends, political leaning, sexual orientation, school, background, the IP addresses you log in from, when you look at the screen of your smart device, which smart devices you use to access Facebook, when you are at the computer and for how long, age, former and current employers, partner, family, friends, ex-girlfriends, music, books, films, series, games, sports, interactions between all groups of friends, which websites you visit while logged in, which content you view on Facebook itself, and other biometric data (such as steps taken and overall fitness via sports applications).

Find this story at 30 June 2014

Facebook reveals news feed experiment to control emotions
Protests over secret study involving 689,000 users in which friends’ postings were moved to influence moods


It already knows whether you are single or dating, the first school you went to and whether you like or loathe Justin Bieber. But now Facebook, the world’s biggest social networking site, is facing a storm of protest after it revealed it had discovered how to make users feel happier or sadder with a few computer key strokes.

It has published details of a vast experiment in which it manipulated information posted on 689,000 users’ home pages and found it could make people feel more positive or negative through a process of “emotional contagion”.

In a study with academics from Cornell and the University of California, Facebook filtered users’ news feeds – the flow of comments, videos, pictures and web links posted by other people in their social network. One test reduced users’ exposure to their friends’ “positive emotional content”, resulting in fewer positive posts of their own. Another test reduced exposure to “negative emotional content” and the opposite happened.

The study concluded: “Emotions expressed by friends, via online social networks, influence our own moods, constituting, to our knowledge, the first experimental evidence for massive-scale emotional contagion via social networks.”
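The filtering step the experiment relied on can be sketched as a toy. This is not the researchers' code: the actual study scored posts with the LIWC word-counting software and omitted targeted posts probabilistically per viewing, while the sketch below uses invented word lists and posts to show the shape of the mechanism.

```python
import random

# Invented word lists; the real experiment used the LIWC tool.
POSITIVE = {"great", "happy", "love", "wonderful"}
NEGATIVE = {"sad", "awful", "hate", "terrible"}

def sentiment(post):
    """Crude polarity: +1 if the post contains a positive word, -1 if negative, else 0."""
    words = set(post.lower().split())
    if words & POSITIVE:
        return 1
    if words & NEGATIVE:
        return -1
    return 0

def filter_feed(posts, suppress, omit_prob, rng):
    """Omit each post of the targeted polarity with probability omit_prob,
    mimicking the study's per-post probabilistic omission."""
    return [p for p in posts
            if sentiment(p) != suppress or rng.random() >= omit_prob]

feed = ["what a wonderful day", "feeling sad today", "meeting at noon"]
rng = random.Random(42)
# Suppress positive content with certainty (omit_prob=1.0) for the demo.
reduced = filter_feed(feed, suppress=1, omit_prob=1.0, rng=rng)
# reduced -> ["feeling sad today", "meeting at noon"]
```

The point of the sketch is how little machinery is needed: a polarity score and one biased coin flip per post are enough to tilt the emotional tone of an entire feed.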

Lawyers, internet activists and politicians said this weekend that the mass experiment in emotional manipulation was “scandalous”, “spooky” and “disturbing”.

On Sunday evening, a senior British MP called for a parliamentary investigation into how Facebook and other social networks manipulated emotional and psychological responses of users by editing information supplied to them.

Jim Sheridan, a member of the Commons media select committee, said the experiment was intrusive. “This is extraordinarily powerful stuff and if there is not already legislation on this, then there should be to protect people,” he said. “They are manipulating material from people’s personal lives and I am worried about the ability of Facebook and others to manipulate people’s thoughts in politics or other areas. If people are being thought-controlled in this kind of way there needs to be protection and they at least need to know about it.”

A Facebook spokeswoman said the research, published this month in the journal Proceedings of the National Academy of Sciences in the US, was carried out “to improve our services and to make the content people see on Facebook as relevant and engaging as possible”.

She said: “A big part of this is understanding how people respond to different types of content, whether it’s positive or negative in tone, news from friends, or information from pages they follow.”

But other commentators voiced fears that the process could be used for political purposes in the runup to elections or to encourage people to stay on the site by feeding them happy thoughts and so boosting advertising revenues.

In a series of Twitter posts, Clay Johnson, the co-founder of Blue State Digital, the firm that built and managed Barack Obama’s online campaign for the presidency in 2008, said: “The Facebook ‘transmission of anger’ experiment is terrifying.”

He asked: “Could the CIA incite revolution in Sudan by pressuring Facebook to promote discontent? Should that be legal? Could Mark Zuckerberg swing an election by promoting Upworthy [a website aggregating viral content] posts two weeks beforehand? Should that be legal?”

It was claimed that Facebook may have breached ethical and legal guidelines by not informing its users they were being manipulated in the experiment, which was carried out in 2012.

The study said altering the news feeds was “consistent with Facebook’s data use policy, to which all users agree prior to creating an account on Facebook, constituting informed consent for this research”.

But Susan Fiske, the Princeton academic who edited the study, said she was concerned. “People are supposed to be told they are going to be participants in research and then agree to it and have the option not to agree to it without penalty.”

James Grimmelmann, professor of law at Maryland University, said Facebook had failed to gain “informed consent” as defined by the US federal policy for the protection of human subjects, which demands explanation of the purposes of the research and the expected duration of the subject’s participation, a description of any reasonably foreseeable risks and a statement that participation is voluntary. “This study is a scandal because it brought Facebook’s troubling practices into a realm – academia – where we still have standards of treating people with dignity and serving the common good,” he said on his blog.

It is not new for internet firms to use algorithms to select content to show to users and Jacob Silverman, author of Terms of Service: Social Media, Surveillance, and the Price of Constant Connection, told Wire magazine on Sunday the internet was already “a vast collection of market research studies; we’re the subjects”.

“What’s disturbing about how Facebook went about this, though, is that they essentially manipulated the sentiments of hundreds of thousands of users without asking permission,” he said. “Facebook cares most about two things: engagement and advertising. If Facebook, say, decides that filtering out negative posts helps keep people happy and clicking, there’s little reason to think that they won’t do just that. As long as the platform remains such an important gatekeeper – and their algorithms utterly opaque – we should be wary about the amount of power and trust we delegate to it.”

Robert Blackie, director of digital at Ogilvy One marketing agency, said the way internet companies filtered information they showed users was fundamental to their business models, which made them reluctant to be open about it.

“To guarantee continued public acceptance they will have to discuss this more openly in the future,” he said. “There will have to be either independent reviewers of what they do or government regulation. If they don’t get the value exchange right then people will be reluctant to use their services, which is potentially a big business problem.”

Robert Booth
The Guardian, Monday 30 June 2014

Find this story at 30 June 2014

© 2014 Guardian News and Media Limited or its affiliated companies. All rights reserved.

Facebook’s Future Plans for Data Collection Beyond All Imagination

Facebook’s dark plans for the future are given away in its patent applications.

“No one knows who will live in this cage in the future, or whether at the end of this tremendous development, entirely new prophets will arise, or there will be a great rebirth of old ideas and ideals, or, if neither, mechanized petrification, embellished with a sort of convulsive self-importance. For of the last stage of this cultural development, it might well be truly said: ‘Specialists without spirit, sensualists without heart; this nullity imagines that it has attained a level of civilization never before achieved.’”

—Max Weber, 1905

On November 12 Facebook, Inc. filed its 178th patent application for a consumer profiling technique the company calls “inferring household income for users of a social networking system.”

“The amount of information gathered from users,” explain Facebook programmers Justin Voskuhl and Ramesh Vyaghrapuri in their patent application, “is staggering — information describing recent moves to a new city, graduations, births, engagements, marriages, and the like.” Facebook and other so-called tech companies have been warehousing all of this information since their respective inceptions. In Facebook’s case, its data vault includes information posted as early as 2004, when the site first went live. Now in a single month the amount of information forever recorded by Facebook —dinner plans, vacation destinations, emotional states, sexual activity, political views, etc.— far surpasses what was recorded during the company’s first several years of operation. And while no one outside of the company knows for certain, it is believed that Facebook has amassed one of the widest and deepest databases in history. Facebook has over 1,189,000,000 “monthly active users” around the world as of October 2013, providing considerable width of data. And Facebook has stored away trillions and trillions of missives and images, and logged other data about the lives of this billion plus statistical sample of humanity. Adjusting for bogus or duplicate accounts it all adds up to about 1/7th of humanity from which some kind of data has been recorded.

According to Facebook’s programmers like Voskuhl and Vyaghrapuri, of all the clever uses they have already applied this pile of data toward, Facebook has so far “lacked tools to synthesize this information about users for targeting advertisements based on their perceived income.” Now they have such a tool, thanks to the retention and analysis of variables the company’s positivist specialists believe are correlated with income levels.

They’ll have many more tools within the next year to run similar predictions. Indeed, Facebook, Google, Yahoo, Twitter, and the hundreds of smaller, lesser-known tech firms that now control the main portals of social, economic, and political life on the web (which is now to say everywhere, as all economic and much social activity is made cyber) are only getting started. The Big Data analytics revolution has barely begun, and these firms are just beginning to tinker with rational-instrumental methods of predicting and manipulating human behavior.

There are few, if any, government regulations restricting their imaginations at this point. Indeed, the U.S. President himself is a true believer in Big Data; the brain of Obama’s election team was a now famous “cave” filled with young Ivy League men (and a few women) sucking up electioneering information and crunching demographic and consumer data to target individual voters with appeals timed to maximize the probability of a vote for the new Big Blue, not IBM, but the Democratic Party’s candidate of “Hope” and “Change.” The halls of power are enraptured by the potential of rational-instrumental methods paired with unprecedented access to data that describes the social lives of hundreds of millions.

Facebook’s intellectual property portfolio reads like cliff notes summarizing the aspirations of all corporations in capitalist modernity; to optimize efficiency in order to maximize profits and reduce or externalize risk. Unlike most other corporations, and unlike previous phases in the development of rational bureaucracies, Facebook and its tech peers have accumulated never before seen quantities of information about individuals and groups. Recent breakthroughs in networked computing make analysis of these gigantic data sets fast and cheap. Facebook’s patent holdings are just a taste of what’s arriving here and now.

The way you type, the rate, common mistakes, intervals between certain characters, is all unique, like your fingerprint, and there are already cyber robots that can identify you as you peck away at keys. Facebook has even patented methods of individual identification with obviously cybernetic overtones, where the machine becomes an appendage of the person. U.S. Patents 8,306,256, 8,472,662, and 8,503,718, all filed within the last year, allow Facebook’s web robots to identify a user based on the unique pixelation and other characteristics of their smartphone’s camera. Identification of the subject is the first step toward building a useful data set to file among the billion or so other user logs. Then comes analysis, then prediction, then efforts to influence a parting of money.
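The typing-rhythm fingerprint described above can be illustrated with a minimal sketch. All names, thresholds and timings here are invented; production keystroke-dynamics systems use richer features (hold times, digraph statistics) and proper classifiers rather than this single distance check.

```python
def intervals(timestamps):
    """Inter-key intervals (ms) from a sequence of keypress timestamps."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def mean_abs_diff(a, b):
    """Average absolute difference between two equal-length interval vectors."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def matches(profile, sample, threshold=30.0):
    """Accept the sample as the profiled typist if the rhythm is close enough."""
    return mean_abs_diff(profile, intervals(sample)) < threshold

# Invented data: a stored rhythm profile and two typing sessions.
stored_profile = [120, 95, 110, 130]   # typical inter-key intervals, ms
same_user = [0, 118, 215, 330, 455]    # keypress times close to the profile
other_user = [0, 200, 260, 500, 560]   # a noticeably different rhythm
```

Here `matches(stored_profile, same_user)` accepts the first session and rejects the second, which is the essence of identifying someone by how, not what, they type.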

Many Facebook patents pertain to advertising techniques that are designed and targeted, and continuously redesigned with ever-finer calibrations by robot programs, to be absorbed by the gazes of individuals as they scroll and swipe across their Facebook feeds, or on third party web sites.

Speaking of feeds, U.S. Patent 8,352,859, Facebook’s system for “Dynamically providing a feed of stories about a user of a social networking system” is used by the company to organize the constantly updated posts and activities inputted by a user’s “friends.” Of course embedded in this system are means of inserting advertisements. According to Facebook’s programmers, a user’s feeds are frequently injected with “a depiction of a product, a depiction of a logo, a display of a trademark, an inducement to buy a product, an inducement to buy a service, an inducement to invest, an offer for sale, a product description, trade promotion, a survey, a political message, an opinion, a public service announcement, news, a religious message, educational information, a coupon, entertainment, a file of data, an article, a book, a picture, travel information, and the like.” That’s a long list for sure, but what gets injected is more often than not whatever will boost revenues for Facebook.

The advantage here, according to Facebook, is that “rather than having to initiate calls or emails to learn news of another user, a user of a social networking website may passively receive alerts to new postings by other users.” The web robot knows best. Sit back and relax and let sociality wash over you, passively. This is merely one of Facebook’s many “systems for tailoring connections between various users” so that these connections ripple with ads uncannily resonant with desires and needs revealed in the quietly observed flow of e-mails, texts, images, and clicks captured forever in dark inaccessible servers of Facebook, Google and the like. These communications services are free in order to control the freedom of data that might otherwise crash about randomly, generating few opportunities for sales.
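The injection pattern the patent describes, stories interleaved with sponsored items, reduces to something like the following sketch. The story and ad content are invented, and the every-third-story schedule is an arbitrary assumption for illustration, not Facebook's actual placement logic.

```python
def inject_ads(stories, ads, every=3):
    """Return the feed with an ad inserted after every `every` stories,
    cycling through the available ads."""
    feed, ad_i = [], 0
    for i, story in enumerate(stories, 1):
        feed.append(story)
        if i % every == 0 and ads:
            feed.append(("ad", ads[ad_i % len(ads)]))
            ad_i += 1
    return feed

# Invented feed content.
stories = [f"story{i}" for i in range(1, 7)]
ads = ["BrandX coupon", "travel offer"]
feed = inject_ads(stories, ads, every=3)
# feed -> six stories with an ad tuple after the third and sixth
```

The user passively scrolls one merged stream; the only variable the operator tunes is which items get slotted into the gaps, and how often.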

Where this fails Facebook ratchets up the probability of influencing the user to behave as a predictable consumer. “Targeted advertisements often fail to earn a user’s trust in the advertised product,” explain Facebook’s programmers in U.S. Patent 8,527,344, filed in September of this year. “For example, the user may be skeptical of the claims made by the advertisement. Thus, targeted advertisements may not be very effective in selling an advertised product.” Facebook’s computer programmers who now profess mastery over sociological forces add that even celebrity endorsements are viewed with skepticism by the savvy citizen of the modulated Internet. They’re probably right.

Facebook’s solution is to mobilize its users as trusted advertisers in their own right. “Unlike advertisements, most users seek and read content generated by their friends within the social networking system; thus,” conclude Facebook’s mathematicians of human inducement, “advertisements generated by a friend of the user are more likely to catch the attention of the user, increasing the effectiveness of the advertisement.” That Facebook’s current So-And-So-likes-BrandX ads are often so clumsy and ineffective does not negate the qualitative shift in this model of advertising and the possibilities of un-freedom it evokes.

Forget iPhones and applications, the tech industry’s core consumer product is now advertising. Their essential practice is mass surveillance conducted in real time through continuous and multiple sensors that pass, for most people, entirely unnoticed. The autonomy and unpredictability of the individual —in Facebook’s language the individual is the “user”— is their fundamental business problem. Reducing autonomy via surveillance and predictive algorithms that can placate existing desires, and even stimulate and mold new desires is the tech industry’s reason for being. Selling their capacious surveillance and consumer stimulus capabilities to the highest bidder is the ultimate end.

Sounds too dystopian? Perhaps, and this is by no means the world we live in, not yet. It is, however, a tendency rooted in the tech economy. The advent of mobile, hand-held, wirelessly networked computers, called “smartphones,” is still so new that the technology, and its services feel like a parallel universe, a new layer of existence added upon our existing social relationships, business activities, and political affiliations. In many ways it feels liberating and often playful. Our devices can map geographic routes, identify places and things, provide information about almost anything in real time, respond to our voices, and replace our wallets. Who hasn’t consulted “Dr. Google” to answer a pressing question? Everyone and everything is seemingly within reach and there is a kind of freedom to this utility.

Most of Facebook’s “users” have only been registered on the web site since 2010, and so the quintessential social network feels new and fun, and although perhaps fraught with some privacy concerns, it does not altogether feel like a threat to the autonomy of the individual. To say it is, is a cliché sci-fi nightmare narrative of tech-bureaucracy, and we all tell one another that the reality is more complex.

Privacy continues, however, to be too narrowly conceptualized as a liberal right against incursions of government, and while the tech companies have certainly been involved in a good deal of old-fashioned mass surveillance for the sake of our federal Big Brother, there’s another means of dissolving privacy that is more fundamental to the goals of the tech companies and more threatening to social creativity and political freedom.

Georgetown University law professor Julie Cohen notes that pervasive surveillance is inimical to the spaces of privacy that are required for liberal democracy, but she adds, importantly, that the surveillance and advertising strategies of the tech industry go further.

“A society that permits the unchecked ascendancy of surveillance infrastructures, which dampen and modulate behavioral variability, cannot hope to maintain a vibrant tradition of cultural and technical innovation,” writes Cohen in a forthcoming Harvard Law Review article.

“Modulation” is Cohen’s term for the tech industry’s practice of using algorithms and other logical machine operations to mine an individual’s data so as to continuously personalize information streams. Facebook’s patents are largely techniques of modulation, as are Google’s and the rest of the industry leaders’. Facebook conducts meticulous surveillance on users, collects their data, tracks their movements on the web, and feeds the individual specific content that is determined to best resonate with their desires, behaviors, and predicted future movements. The point is to perfect the form and function of the rational-instrumental bureaucracy as defined by Max Weber: to constantly ratchet up efficiency, calculability, predictability, and control. If they succeed in their own terms, the tech companies stand to create a feedback loop made perfectly to fit each and every one of us, an increasingly closed system of personal development in which the great algorithms in the cloud endlessly tailor the psychological and social inputs of humans who lose the gift of randomness and irrationality.

“It is modulation, not privacy, that poses the greater threat to innovative practice,” explains Cohen. “Regimes of pervasively distributed surveillance and modulation seek to mold individual preferences and behavior in ways that reduce the serendipity and the freedom to tinker on which innovation thrives.” Cohen has pointed out the obvious irony here, not that it’s easy to miss: the tech industry is uncritically labeled America’s hothouse of innovation, but it may in fact be killing innovation by disenchanting the world and locking inspiration in a cage.

If there were limits to the reach of the tech industry’s surveillance and stimuli strategies it would indeed be less worrisome. Only parts of our lives would be subject to this modulation, and it could therefore benefit us. But the industry aspires to totalitarian visions in which universal data sets are constantly mobilized to transform an individual’s interface with society, family, the economy, and other institutions. The tech industry’s luminaries are clear in their desire to observe and log everything, and use every “data point” to establish optimum efficiency in life as the pursuit of consumer happiness. Consumer happiness is, in turn, a step toward the rational pursuit of maximum corporate profit. We are told that the “Internet of things” is arriving, that soon every object will have embedded within it a computer that is networked to the sublime cloud, and that the physical environment will be made “smart” through the same strategy of modulation so that we might be made free not just in cyberspace, but also in the meatspace.

Whereas the Internet of the late 1990s matured as an archipelago of innumerable disjointed and disconnected web sites and databases, today’s Internet is gripped by a handful of giant companies that observe much of the traffic and communications, and which deliver much of the information from an Android phone or laptop computer, to distant servers, and back. The future Internet being built by the tech giants (putting aside the Internet of things for the moment) is already well into its beta testing phase. It’s a seamlessly integrated quilt of web sites and apps that all absorb “user” data, everything from clicks and keywords to biometric voice identification and geolocation.

United States Patent 8,572,174, another of Facebook’s recent inventions, allows the company to personalize a web page outside of Facebook’s own system with content from Facebook’s databases. Facebook is selling what the company calls its “rich set of social information” to third party web sites in order to “provide personalized content for their users based on social information about those users that is maintained by, or otherwise accessible to, the social networking system.” Facebook’s users generated this rich social information, worth many billions of dollars as recent quarterly earnings of the company attest.
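The general technique the patent describes, a partner site querying the social network for a visitor’s friends’ activity, might be sketched as follows. The data layout and function names here are hypothetical, not drawn from the patent’s claims or from Facebook’s actual API.

```python
# Hypothetical sketch of third-party personalization from social data.
# The data and API shape are invented; the patent describes the general
# technique, not this code.

SOCIAL_GRAPH = {
    "alice": {"friends": ["bob", "carol"], "likes": ["acme-blender"]},
    "bob":   {"friends": ["alice"],        "likes": ["acme-blender"]},
    "carol": {"friends": ["alice"],        "likes": ["road-bike"]},
}

def personalize(visitor, product_id):
    """Return the visitor's friends who liked this product: the kind of
    'rich social information' a partner site could render inline."""
    friends = SOCIAL_GRAPH.get(visitor, {}).get("friends", [])
    return [f for f in friends if product_id in SOCIAL_GRAPH[f]["likes"]]

# A shopping site, not the social network itself, renders the result:
endorsers = personalize("alice", "acme-blender")
print(f"{len(endorsers)} of your friends like this item: {', '.join(endorsers)}")
```

The commercially interesting part is that the query runs on a page Facebook does not own, which is what makes "the entire Internet" a surface for the social graph.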

In this way the entire Internet becomes Facebook. The totalitarian ambition here is obvious, and it can be read in the securities filings, patent applications, and other non-sanitized business documents crafted by the tech industry for the financial analysts who supply the capital for further so-called innovation. Everywhere you go on the web, with your phone or tablet, you’re a “user,” and your social network data will be mined every second by every application, site, and service to “enhance your experience,” as Facebook and others say. The tech industry’s leaders aim to expand this into the physical world, creating modulated advertising and environmental experiences as cameras and sensors track our movements.

Facebook and the rest of the tech industry fear autonomy and unpredictability. The ultimate expression of these irrational variables that cannot be mined with algorithmic methods is absence from the networks of surveillance in which data is collected.

One of Facebook’s preventative measures is United States Patent 8,560,962, “promoting participation of low-activity users in social networking system.” This novel invention, devised by programmers in Facebook’s Palo Alto and San Francisco offices, involves a “process of inducing interactions” meant to maximize the amount of “user-generated content” on Facebook by getting lapsed users to return and stimulating all users to produce more and more data. User-generated content is, after all, worth billions. Think twice before you hit “like” next time, or tap that conspicuously placed “share” button; a machine likely put that content and interaction before your eyes after a logical operation determined it to have the highest probability of tempting you to add to the data stream, thereby increasing corporate revenues.
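A minimal sketch of what “inducing interactions” could look like in practice, assuming a model that scores each candidate prompt by its predicted chance of re-engaging a lapsed user. The prompts, the user, and the probabilities below are all invented stand-ins, not anything from the patent itself.

```python
# Hypothetical sketch of "inducing interactions": for a lapsed user,
# pick the prompt with the highest predicted chance of restarting
# activity. Probabilities are invented stand-ins for a trained model.

PROMPTS = ["friend_tagged_photo", "birthday_reminder", "trending_in_network"]

def predicted_response(user, prompt):
    """Stand-in for a model estimating P(user re-engages | prompt)."""
    table = {
        ("dana", "friend_tagged_photo"): 0.31,
        ("dana", "birthday_reminder"):   0.12,
        ("dana", "trending_in_network"): 0.07,
    }
    return table.get((user, prompt), 0.0)

def choose_nudge(user):
    """Send whichever notification is most likely to pull the user back."""
    return max(PROMPTS, key=lambda p: predicted_response(user, p))

print(choose_nudge("dana"))
```

The point of the sketch is the optimization target: not what the user wants to see, but what maximizes the flow of user-generated content back into the system.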

Facebook’s patents on techniques of modulating “user” behavior are few compared to those held by the real giants of the tech industry’s surveillance and influence agenda. Amazon, Microsoft, and of course Google hold some of the most fundamental patents on using personal data to attempt to shape an individual’s behavior into predictable consumptive patterns. Smaller specialized firms like Choicestream and Gist Communications have filed dozens more applications for modulation techniques. The rate of this so-called innovation is rapidly telescoping.

Perhaps we do know who will live in the iron cage. It might very well be a cage made of our own user-generated content, paradoxically ushering in a new era of possibilities in shopping convenience and the delivery of satisfactory experiences even while it eradicates many degrees of chance, and pain, and struggle (the motive forces of human progress) in a robot-powered quest to have us construct identities and relationships that yield to prediction and computer-generated suggestion.

Defense of individual privacy and autonomy today is rightly motivated by the reach of an Orwellian security state (the NSA, FBI, CIA). This surveillance changes our behavior by chilling us, by telling us we are always being watched by authority. Authority thereby represses in us whatever happens to be defined as “crime,” or any anti-social behavior, at the moment. But what about the surveillance that does not seek to repress us, the watching computer eyes and ears that instead hope to stimulate a particular set of monetized behaviors in us with the intimate knowledge gained from our every online utterance, even our facial expressions and finger movements?

Darwin Bond-Graham, a contributing editor to CounterPunch, is a sociologist and author who lives and works in Oakland, CA. His essay on economic inequality in the “new” California economy appears in the July issue of CounterPunch magazine. He is a contributor to Hopeless: Barack Obama and the Politics of Illusion.

By Darwin Bond-Graham
December 4, 2013

Find this story at 4 December 2013

copyright http://www.alternet.org/

Who Owns Photos and Videos Posted on Facebook, Instagram or Twitter?

Well, it depends on what you mean by “own.” Under copyright law, unless there is an agreement to the contrary or the photograph or video is shot as part of your job, the copyright to a photograph generally belongs to its creator. As the copyright owner, you own the exclusive rights to display, copy, use, reproduce, distribute and perform your creation as you see fit and approve. As the subject of the photograph, you have a right of publicity, which allows you to get paid for the commercial use of your name, likeness or voice.
But what happens when you decide to post that picture on the Internet — perhaps on Facebook or Twitter (using Twitpic), or some other social network or photo-sharing site?
You may be shocked to find out that once you post on these sites, although you still “own” the photograph, you grant the social media sites a license to use your photograph any way they see fit for free AND you grant them the right to let others use your picture as well! This means that not only can Twitter, Twitpic and Facebook make money from the photograph or video (otherwise, a copyright violation), but these sites are making commercial gain by licensing these images, which contain the likeness of the person in the photo or video (otherwise, a violation of their “rights of publicity”).
Facebook
Under Facebook’s current terms (which can change at any time), by posting your pictures and videos, you grant Facebook “a non-exclusive, transferable, sub-licensable, royalty-free, worldwide license to use any [IP] content that you post on or in connection with Facebook” (the “IP License”). This IP License ends when you delete your IP content or your account, unless your content has been shared with others and they have not deleted it. Beware of the words “transferable, sub-licensable, royalty-free, worldwide license.” This means that Facebook can license your content to others for free without obtaining any other approval from you! You should be aware that once your photos or videos are shared on Facebook, it could be impossible to delete them from Facebook, even if you delete the content or cancel your account (the content still remains on Facebook servers, and they can keep backups)! So, although you may be able to withdraw your consent to the use of photos on Facebook, you should also keep in mind that if you share your photos and videos with Facebook applications, those applications may have their own terms and conditions governing how they use your creation! You should read the fine print to make sure you are not agreeing to something that you don’t want to have happen.
Twitter
Twitter’s photo sharing service, Twitpic, just updated their Terms of Service on May 10, 2011 (which, of course, can and will be updated again at any time). By uploading content using Twitpic, you are giving “Twitpic permission to use or distribute your content on Twitpic.com or affiliated sites.” You are also granting “Twitpic a worldwide, non-exclusive, royalty-free, sublicenseable and transferable license to use, reproduce, distribute, prepare derivative works of, display, and perform the Content in connection with the Service and Twitpic’s (and its successors’ and affiliates’) business, including without limitation for promoting and redistributing part or all of the Service (and derivative works thereof) in any media formats and through any media channels.”
The terms go on to state that you also grant “each user of the Service a non-exclusive license to access your Content through the Service, and to use, reproduce, distribute, display and perform such Content as permitted through the functionality of the Service and under these Terms of Service. The above licenses granted by you in media Content you submit to the Service terminate within a commercially reasonable time after you remove or delete your media from the Service provided that any sub-license by Twitpic to use, reproduce or distribute the Content prior to such termination may be perpetual and irrevocable.”
Twitpic/Twitter is probably more problematic than Facebook — They can sell your images and videos if they want!
First, there is no definition of “Service” on their site (they need to find a more detail-oriented internet attorney to draft their terms (Twitpic, call me)), so your photo could be used throughout the Internet. More troubling is that your photos and videos may be reprinted and used in anything without your getting paid a dime – books, magazines, movies, TV shows, billboards — you get the picture!
Second, Twitter can create derivative works from your creations. A derivative work is anything that is built upon your work (like adding your video to a TV show, putting your photo in a montage, etc.).
Third, even after you delete your photos from Twitpic, Twitter and Twitpic can still use your creations for a “reasonable” amount of time afterwards. And what would be a reasonable amount of time to continue using your photo after you terminate the “license,” if your photo or video has been incorporated by Twitter or Twitpic into a larger work? Perhaps forever, if it would cost them money to remove it!
Lastly, since Twitter/Twitpic can grant others the right to use your photos (and make money from them without paying you; remember the nasty words “royalty-free”), even if you terminate your Twitter/Twitpic account, the rights they grant to others can never be terminated! Twitter has a deal with World Entertainment News Network permitting them to sell Twitpic content with no money going to you!
Celebrities and celebrities-to-be, beware! Your right to publicity (e.g. your right to get paid when others use your name, likeness, voice for commercial gain like product or sports endorsements) is stripped away each and every time you post on Twitter! You or your intellectual property attorney should read the fine print before you post your photos or videos on Twitter or Facebook!
December 19, 2012 UPDATE
Instagram
Well Facebook was at it again (changing their terms of service for their latest acquisition, Instagram). The proposed changes are to take place on January 16, 2013. Basically, Instagram had a brilliant idea to generate money off the backs of their members. The proposed terms of service explicitly state “To help us deliver interesting paid or sponsored content or promotions, you agree that a business or other entity may pay us to display your username, likeness, photos (along with any associated metadata), and/or actions you take, in connection with paid or sponsored content or promotions, without any compensation to you. If you are under the age of eighteen (18), or under any other applicable age of majority, you represent that at least one of your parents or legal guardians has also agreed to this provision (and the use of your name, likeness, username, and/or photos (along with any associated metadata)) on your behalf.”
This means that Instagram can make money from advertisers that want to use your face or pictures of your loved ones in any advertising (TV, web, magazines, newspapers, etc.) and never pay you a penny! Even worse, if you are under 18 (which means you don’t have the legal capacity to enter into a contract), you are making a contractual agreement that you have asked your parents’ permission to agree to the Instagram terms. This is not only an egregious position (see the discussion above about rights of publicity), but defies logic: Instagram acknowledges that minors can’t enter into a contract, but nevertheless forces those under 18 to agree by (unenforceable) contract that they have permission anyway. Go figure! [Finally there is a reason to go back to the old 2-hour Kodak Carousel slide shows of Aunt Sally’s vacation.]
[December 21, 2012 UPDATE]
Instagram announced today that it was backing off its proposed T&C’s that would have let it sell content without paying the members. But a closer look at the replacement terms of use shows they are just as bad. “Instagram does not claim ownership of any Content that you post on or through the Service. Instead, you hereby grant to Instagram a non-exclusive, fully paid and royalty-free, transferable, sub-licensable, worldwide license to use the Content that you post on or through the Service, subject to the Service’s Privacy Policy. . .” This means that Instagram can still sublicense your photos to any company for a fee (without paying the member)! And it gets worse. For instance, let’s say a posted photo is of a celebrity. Instagram then licenses that picture to an advertiser. But then the advertiser gets sued by the celebrity for violation of their right of publicity (and in turn sues Instagram). You, the poster, would have to indemnify Instagram because, under section 4(iii) of the terms, “you agree to pay for all royalties, fees, and any other monies owed by reason of Content you post on or through the Service.” Bottom line – Instagram still gets to sell your pictures without paying you, and you can be liable in the event they have to return that money to the advertiser!

Find this story at December 2012

© 2012 Law Offices of Craig Delsack, LLC

Your Facebook Data File: Everything You Never Wanted Anyone to Know

A group of Austrian students called Europe v. Facebook recently got their hands on their complete Facebook user data files – note, this is not the same file Facebook sends if you request your personal history through the webform in Account Settings.

See, Facebook wants you to feel safe and warm and fuzzy about controlling your own privacy. As we move into the era of the Open Graph and apps that autopost your activities, users are raising serious questions about data collection and privacy.

To help quell these fears, Facebook lets users download their own data, as they said in an official statement to the Wall Street Journal blog Digits:

“We believe that every Facebook user owns his or her own data and should have simple and easy access to it. That is why we’ve built an easy way for people to download everything they have ever posted on Facebook, including all of their messages, posts, photos, status updates and profile information. People who want a copy of the information they have put on Facebook can click a link located in ‘Account Settings’ and easily get a copy of all of it in a single download. To protect the information, this feature is only available after the person confirms his or her password and answers appropriate security questions.”

Phew, that’s good. But wait… how come the students over at Europe v. Facebook got a different, more complete file when requested through Section 4 DPA + Art. 12 Directive 95/46/EG, a European privacy law? The carefully crafted statement above says they will give you access to everything you’ve put on Facebook – but what about the data Facebook collects without your knowledge?

What You May Not Get in Your Copy of Your Facebook File

On their website, Europe v. Facebook lists their primary objective as transparency, saying, “It is almost impossible for the user to really know what happens to his or her personal data when using facebook. For example ‘removed’ content is not really deleted by Facebook and it is often unclear what Facebook exactly does with our data.”

Indeed, the complete user file they received when requested through Section 4 DPA + Art. 12 Directive 95/46/EG is the same one available to attorneys and law enforcement via court order. It contains more information than the one Facebook sends users through their webform, according to Europe v. Facebook founder and law student Max Schrems, including:

Every friend request you’ve ever received and how you responded.
Every poke you’ve exchanged.
Every event you’ve been invited to through Facebook and how you responded.
The IP address used each and every time you’ve logged in to Facebook.
Dates of user name changes and historical privacy settings changes.
Camera metadata including time stamps and latitude/longitude of picture location, as well as tags from photos – even if you’ve untagged yourself.
Credit card information, if you’ve ever purchased credits or advertising on Facebook.
Your last known physical location, with latitude, longitude, time/date, altitude, and more. The report notes that they are unsure how Facebook collects this data.

One of Europe v. Facebook’s chief objections is that Facebook offers “no sufficient way of deleting old junk data.” Many of the complaints they’ve filed with the Irish Data Protection Commissioner* involve Facebook’s continued storage of data users believe they have deleted. Copies of the redacted files received through their requests are published on the Europe v. Facebook website.

Better Hope You’ve Behaved Yourself…

Ever flirted with someone other than your spouse in a Facebook chat? You had better hope your message records don’t end up in the hands of a divorce lawyer, because they can access even the ones you’ve deleted.

That day you called your employer in Chicago and begged off work because you were “sick”? You logged in to Facebook from an IP address in Miami. Oops.

A few weeks ago, an Australian hacker exposed Facebook’s practice of tracking logged-out users, and the company quickly “fixed” the problem (after initially trying to defend it). But the extent to which Facebook collects and keeps information users may not even realize they are giving it in the first place – or believe they’ve deleted – is worrisome for privacy watchdogs.

The truly questionable thing is that the average user has no idea what their file contains and, in North America at least, has no right to access it. ITWorld’s Dan Tynan requested his, citing the U.S. Constitution, but received only an autoresponse telling him the form is only applicable in certain jurisdictions. In other words, if they’re not required by law to release your data to you, don’t hold your breath.

But then, maybe you’ll be one of the “lucky” ones who will have your activities brought up in court or a police investigation. There will be little left to the imagination, then.

What You Can Do About It

We contacted Max Schrems and asked whether Europe v. Facebook is able to help users, even those in other jurisdictions, to access their personal files. Though they receive emails from around the world, he said, their focus is on the 22 active complaints they currently have registered with the Irish Data Protection Commission. Residents of the European Union can fill out the online form on Facebook’s website (this is not the Account Settings form, but a request for the full file).

Schrems did offer tips for all users who want to curb the amount of information they’re handing over to Facebook from this point forward. “I would frequently check my privacy settings, turn everything to ‘Friends only’ and turn off ‘Platform.’ Users have to realize that you don’t just share with your Friends, but you always share with your Friends AND Facebook.”

Judging by the sheer difference in file sizes between the personally requested and the legally requested files Schrems and Europe v. Facebook received, there’s a lot of data left on the table. Schrems described the file obtained through a legal request as a 500MB PDF including data the user thought they had deleted. The one sent through a regular Facebook request was a 150MB HTML file that included video (the PDF did not) but not the deleted data.

We reached out to Facebook for comment but had not received a response by the time of publication.

*Europe v. Facebook files their complaints in Ireland, as Facebook’s User Terms list their Ireland office as headquarters for all Facebook affairs outside of Canada and the U.S.

Miranda Miller, October 3, 2011

Find this story at 3 October 2011

© 2014 Incisive Interactive Marketing LLC.

What Facebook Knows

The company’s social scientists are hunting for insights about human behavior. What they find could give Facebook new ways to cash in on our data—and remake our view of society.

Cameron Marlow calls himself Facebook’s “in-house sociologist.” He and his team can analyze essentially all the information the site gathers.

If Facebook were a country, a conceit that founder Mark Zuckerberg has entertained in public, its 900 million members would make it the third largest in the world.

It would far outstrip any regime past or present in how intimately it records the lives of its citizens. Private conversations, family photos, and records of road trips, births, marriages, and deaths all stream into the company’s servers and lodge there. Facebook has collected the most extensive data set ever assembled on human social behavior. Some of your personal information is probably part of it.

And yet, even as Facebook has embedded itself into modern life, it hasn’t actually done that much with what it knows about us. Now that the company has gone public, the pressure to develop new sources of profit (see “The Facebook Fallacy”) is likely to force it to do more with its hoard of information. That stash of data looms like an oversize shadow over what today is a modest online advertising business, worrying privacy-conscious Web users (see “Few Privacy Regulations Inhibit Facebook”) and rivals such as Google. Everyone has a feeling that this unprecedented resource will yield something big, but nobody knows quite what.

Heading Facebook’s effort to figure out what can be learned from all our data is Cameron Marlow, a tall 35-year-old who until recently sat a few feet away from Zuckerberg. The group Marlow runs has escaped the public attention that dogs Facebook’s founders and the more headline-grabbing features of its business. Known internally as the Data Science Team, it is a kind of Bell Labs for the social-networking age. The group has 12 researchers—but is expected to double in size this year. They apply math, programming skills, and social science to mine our data for insights that they hope will advance Facebook’s business and social science at large. Whereas other analysts at the company focus on information related to specific online activities, Marlow’s team can swim in practically the entire ocean of personal data that Facebook maintains. Of all the people at Facebook, perhaps even including the company’s leaders, these researchers have the best chance of discovering what can really be learned when so much personal information is compiled in one place.

Facebook has all this information because it has found ingenious ways to collect data as people socialize. Users fill out profiles with their age, gender, and e-mail address; some people also give additional details, such as their relationship status and mobile-phone number. A redesign last fall introduced profile pages in the form of time lines that invite people to add historical information such as places they have lived and worked. Messages and photos shared on the site are often tagged with a precise location, and in the last two years Facebook has begun to track activity elsewhere on the Internet, using an addictive invention called the “Like” button. It appears on apps and websites outside Facebook and allows people to indicate with a click that they are interested in a brand, product, or piece of digital content. Since last fall, Facebook has also been able to collect data on users’ online lives beyond its borders automatically: in certain apps or websites, when users listen to a song or read a news article, the information is passed along to Facebook, even if no one clicks “Like.” Within the feature’s first five months, Facebook catalogued more than five billion instances of people listening to songs online. Combine that kind of information with a map of the social connections Facebook’s users make on the site, and you have an incredibly rich record of their lives and interactions.
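The passive collection described above, an embedded widget reporting a visit whether or not anyone clicks it, can be sketched server-side. The schema below is an illustrative guess at the general mechanism, not Facebook’s implementation.

```python
# Sketch of passive tracking via an embedded widget: every page that
# hosts the button reports the visit, click or no click. The schema is
# an illustrative guess, not Facebook's actual implementation.

from collections import defaultdict

browsing_profiles = defaultdict(list)   # cookie id -> events seen

def widget_loaded(cookie_id, page_url):
    """Called whenever a page embedding the widget renders, i.e. on
    every visit, not just when the user clicks 'Like'."""
    browsing_profiles[cookie_id].append(page_url)

def liked(cookie_id, page_url):
    """The explicit click is just one more row in the same profile."""
    browsing_profiles[cookie_id].append(("LIKE", page_url))

widget_loaded("user-123", "news-site.example/article-1")
widget_loaded("user-123", "shop.example/sneakers")
liked("user-123", "shop.example/sneakers")
print(len(browsing_profiles["user-123"]))  # three events from one click
```

The asymmetry is the point: one deliberate click, but a browsing trail of every page visited flows into the same profile.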

“This is the first time the world has seen this scale and quality of data about human communication,” Marlow says with a characteristically serious gaze before breaking into a smile at the thought of what he can do with the data. For one thing, Marlow is confident that exploring this resource will revolutionize the scientific understanding of why people behave as they do. His team can also help Facebook influence our social behavior for its own benefit and that of its advertisers. This work may even help Facebook invent entirely new ways to make money.

Contagious Information

Marlow eschews the collegiate programmer style of Zuckerberg and many others at Facebook, wearing a dress shirt with his jeans rather than a hoodie or T-shirt. Meeting me shortly before the company’s initial public offering in May, in a conference room adorned with a six-foot caricature of his boss’s dog spray-painted on its glass wall, he comes across more like a young professor than a student. He might have become one had he not realized early in his career that Web companies would yield the juiciest data about human interactions.

In 2001, undertaking a PhD at MIT’s Media Lab, Marlow created a site called Blogdex that automatically listed the most “contagious” information spreading on weblogs. Although it was just a research project, it soon became so popular that Marlow’s servers crashed. Launched just as blogs were exploding into the popular consciousness and becoming so numerous that Web users felt overwhelmed with information, it prefigured later aggregator sites such as Digg and Reddit. But Marlow didn’t build it just to help Web users track what was popular online. Blogdex was intended as a scientific instrument to uncover the social networks forming on the Web and study how they spread ideas. Marlow went on to Yahoo’s research labs to study online socializing for two years. In 2007 he joined Facebook, which he considers the world’s most powerful instrument for studying human society. “For the first time,” Marlow says, “we have a microscope that not only lets us examine social behavior at a very fine level that we’ve never been able to see before but allows us to run experiments that millions of users are exposed to.”

Marlow’s team works with managers across Facebook to find patterns that they might make use of. For instance, they study how a new feature spreads among the social network’s users. They have helped Facebook identify users you may know but haven’t “friended,” and recognize those you may want to designate mere “acquaintances” in order to make their updates less prominent. Yet the group is an odd fit inside a company where software engineers are rock stars who live by the mantra “Move fast and break things.” Lunch with the data team has the feel of a grad-student gathering at a top school; the typical member of the group joined fresh from a PhD or junior academic position and prefers to talk about advancing social science rather than about Facebook as a product or company. Several members of the team have training in sociology or social psychology, while others began in computer science and started using it to study human behavior. They are free to use some of their time, and Facebook’s data, to probe the basic patterns and motivations of human behavior and to publish the results in academic journals—much as Bell Labs researchers advanced both AT&T’s technologies and the study of fundamental physics.

It may seem strange that an eight-year-old company without a proven business model bothers to support a team with such an academic bent, but Marlow says it makes sense. “The biggest challenges Facebook has to solve are the same challenges that social science has,” he says. Those challenges include understanding why some ideas or fashions spread from a few individuals to become universal and others don’t, or to what extent a person’s future actions are a product of past communication with friends. Publishing results and collaborating with university researchers will lead to findings that help Facebook improve its products, he adds.

Eytan Bakshy experimented with the way Facebook users shared links so that his group could study whether the site functions like an echo chamber.

For one example of how Facebook can serve as a proxy for examining society at large, consider a recent study of the notion that any person on the globe is just six degrees of separation from any other. The best-known real-world study, in 1967, involved a few hundred people trying to send postcards to a particular Boston stockholder. Facebook’s version, conducted in collaboration with researchers from the University of Milan, involved the entire social network as of May 2011, which amounted to more than 10 percent of the world’s population. Analyzing the 69 billion friend connections among those 721 million people showed that the world is smaller than we thought: four intermediary friends are usually enough to introduce anyone to a random stranger. “When considering another person in the world, a friend of your friend knows a friend of their friend, on average,” the technical paper pithily concluded. That result may not extend to everyone on the planet, but there’s good reason to believe that it and other findings from the Data Science Team are true to life outside Facebook. Last year the Pew Research Center’s Internet & American Life Project found that 93 percent of Facebook friends had met in person.

One of Marlow’s researchers has developed a way to calculate a country’s “gross national happiness” from its Facebook activity by logging the occurrence of words and phrases that signal positive or negative emotion. Gross national happiness fluctuates in a way that suggests the measure is accurate: it jumps during holidays and dips when popular public figures die. After a major earthquake in Chile in February 2010, the country’s score plummeted and took many months to return to normal. That event seemed to make the country as a whole more sympathetic when Japan suffered its own big earthquake and subsequent tsunami in March 2011; while Chile’s gross national happiness dipped, the figure didn’t waver in any other countries tracked (Japan wasn’t among them).
Adam Kramer, who created the index, says he intended it to show that Facebook’s data could provide cheap and accurate ways to track social trends—methods that could be useful to economists and other researchers.
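
A minimal sketch of how such a word-count index could work; the actual lexicon, weighting, and normalization Kramer used are not described in the article, so the word lists below are purely illustrative:

```python
# Illustrative "happiness index": the share of emotion words that are
# positive. The word lists are stand-ins, not the study's real lexicon.
POSITIVE = {"happy", "great", "love", "awesome", "excited"}
NEGATIVE = {"sad", "terrible", "hate", "awful", "angry"}

def happiness_index(status_updates):
    """Fraction of emotion words that are positive across many updates."""
    pos = neg = 0
    for update in status_updates:
        for word in update.lower().split():
            word = word.strip(".,!?")
            if word in POSITIVE:
                pos += 1
            elif word in NEGATIVE:
                neg += 1
    total = pos + neg
    return pos / total if total else 0.5  # neutral when there is no signal

updates = ["So happy today, love this!", "Terrible news, very sad."]
print(happiness_index(updates))  # 0.5 (two positive words, two negative)
```

Tracked day by day over a whole country’s updates, a score like this would jump on holidays and dip after disasters, which is exactly the sanity check the article describes.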

Other work published by the group has more obvious utility for Facebook’s basic strategy, which involves encouraging us to make the site central to our lives and then using what it learns to sell ads. An early study looked at what types of updates from friends encourage newcomers to the network to add their own contributions. Right before Valentine’s Day this year a blog post from the Data Science Team listed the songs most popular with people who had recently signaled on Facebook that they had entered or left a relationship. It was a hint of the type of correlation that could help Facebook make useful predictions about users’ behavior—knowledge that could help it make better guesses about which ads you might be more or less open to at any given time. Perhaps people who have just left a relationship might be interested in an album of ballads, or perhaps no company should associate its brand with the flood of emotion attending the death of a friend. The most valuable online ads today are those displayed alongside certain Web searches, because the searchers are expressing precisely what they want. This is one reason why Google’s revenue is 10 times Facebook’s. But Facebook might eventually be able to guess what people want or don’t want even before they realize it.

Recently the Data Science Team has begun to use its unique position to experiment with the way Facebook works, tweaking the site—the way scientists might prod an ant’s nest—to see how users react. Eytan Bakshy, who joined Facebook last year after collaborating with Marlow as a PhD student at the University of Michigan, wanted to learn whether our actions on Facebook are mainly influenced by those of our close friends, who are likely to have similar tastes. That would shed light on the theory that our Facebook friends create an “echo chamber” that amplifies news and opinions we have already heard about. So he messed with how Facebook operated for a quarter of a billion users. Over a seven-week period, the 76 million links that those users shared with each other were logged. Then, on 219 million randomly chosen occasions, Facebook prevented someone from seeing a link shared by a friend. Hiding links this way created a control group so that Bakshy could assess how often people end up promoting the same links because they have similar information sources and interests.
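
The core comparison behind that design can be sketched in a few lines; the counts below are invented for illustration and are not Bakshy’s actual results:

```python
# Compare how often users post a link when a friend's share was visible
# ("feed" condition) versus when Facebook randomly withheld it (control).
# All numbers here are made up purely to show the arithmetic.
def sharing_rate(shared, total):
    return shared / total

feed_rate = sharing_rate(shared=441, total=76_000)    # friend's share shown
control_rate = sharing_rate(shared=13, total=76_000)  # link was hidden

# The ratio estimates how much feed exposure amplifies sharing beyond
# what shared interests alone would produce.
amplification = feed_rate / control_rate
print(round(amplification, 1))  # 33.9
```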

He found that our close friends strongly sway which information we share, but overall their impact is dwarfed by the collective influence of numerous more distant contacts—what sociologists call “weak ties.” It is our diverse collection of weak ties that most powerfully determines what information we’re exposed to.

That study provides strong evidence against the idea that social networking creates harmful “filter bubbles,” to use activist Eli Pariser’s term for the effects of tuning the information we receive to match our expectations. But the study also reveals the power Facebook has. “If [Facebook’s] News Feed is the thing that everyone sees and it controls how information is disseminated, it’s controlling how information is revealed to society, and it’s something we need to pay very close attention to,” Marlow says. He points out that his team helps Facebook understand what it is doing to society and publishes its findings to fulfill a public duty to transparency. Another recent study, which investigated which types of Facebook activity cause people to feel a greater sense of support from their friends, falls into the same category.


But Marlow speaks as an employee of a company that will prosper largely by catering to advertisers who want to control the flow of information between its users. And indeed, Bakshy is working with managers outside the Data Science Team to extract advertising-related findings from the results of experiments on social influence. “Advertisers and brands are a part of this network as well, so giving them some insight into how people are sharing the content they are producing is a very core part of the business model,” says Marlow.

Facebook told prospective investors before its IPO that people are 50 percent more likely to remember ads on the site if they’re visibly endorsed by a friend. Figuring out how influence works could make ads even more memorable or help Facebook find ways to induce more people to share or click on its ads.

Social Engineering

Marlow says his team wants to divine the rules of online social life to understand what’s going on inside Facebook, not to develop ways to manipulate it. “Our goal is not to change the pattern of communication in society,” he says. “Our goal is to understand it so we can adapt our platform to give people the experience that they want.” But some of his team’s work and the attitudes of Facebook’s leaders show that the company is not above using its platform to tweak users’ behavior. Unlike academic social scientists, Facebook’s employees have a short path from an idea to an experiment on hundreds of millions of people.

In April, influenced in part by conversations over dinner with his med-student girlfriend (now his wife), Zuckerberg decided that he should use social influence within Facebook to increase organ donor registrations. Users were given an opportunity to click a box on their Timeline pages to signal that they were registered donors, which triggered a notification to their friends. The new feature started a cascade of social pressure, and organ donor enrollment increased by a factor of 23 across 44 states.

Marlow’s team is in the process of publishing results from the last U.S. midterm election that show another striking example of Facebook’s potential to direct its users’ influence on one another. Since 2008, the company has offered a way for users to signal that they have voted; Facebook promotes that to their friends with a note to say that they should be sure to vote, too. Marlow says that in the 2010 election his group matched voter registration logs with the data to see which of the Facebook users who got nudges actually went to the polls. (He stresses that the researchers worked with cryptographically “anonymized” data and could not match specific users with their voting records.)


This is just the beginning. By learning more about how small changes on Facebook can alter users’ behavior outside the site, the company eventually “could allow others to make use of Facebook in the same way,” says Marlow. If the American Heart Association wanted to encourage healthy eating, for example, it might be able to refer to a playbook of Facebook social engineering. “We want to be a platform that others can use to initiate change,” he says.

Advertisers, too, would be eager to know in greater detail what could make a campaign on Facebook affect people’s actions in the outside world, even though they realize there are limits to how firmly human beings can be steered. “It’s not clear to me that social science will ever be an engineering science in a way that building bridges is,” says Duncan Watts, who works on computational social science at Microsoft’s recently opened New York research lab and previously worked alongside Marlow at Yahoo’s labs. “Nevertheless, if you have enough data, you can make predictions that are better than simply random guessing, and that’s really lucrative.”

Doubling Data

Like other social-Web companies, such as Twitter, Facebook has never attained the reputation for technical innovation enjoyed by such Internet pioneers as Google. If Silicon Valley were a high school, the search company would be the quiet math genius who didn’t excel socially but invented something indispensable. Facebook would be the annoying kid who started a club with such social momentum that people had to join whether they wanted to or not. In reality, Facebook employs hordes of talented software engineers (many poached from Google and other math-genius companies) to build and maintain its irresistible club. The technology built to support the Data Science Team’s efforts is particularly innovative. The scale at which Facebook operates has led it to invent hardware and software that are the envy of other companies trying to adapt to the world of “big data.”

In a kind of passing of the technological baton, Facebook built its data storage system by expanding the power of open-source software called Hadoop, which was inspired by work at Google and built at Yahoo. Hadoop can tame seemingly impossible computational tasks—like working on all the data Facebook’s users have entrusted to it—by spreading them across many machines inside a data center. But Hadoop wasn’t built with data science in mind, and using it for that purpose requires specialized, unwieldy programming. Facebook’s engineers solved that problem with the invention of Hive, open-source software that’s now independent of Facebook and used by many other companies. Hive acts as a translation service, making it possible to query vast Hadoop data stores using relatively simple code. To cut down on computational demands, it can request random samples of an entire data set, a feature that’s invaluable for companies swamped by data. Much of Facebook’s data resides in one Hadoop store more than 100 petabytes (a million gigabytes) in size, says Sameet Agarwal, a director of engineering at Facebook who works on data infrastructure, and the quantity is growing exponentially. “Over the last few years we have more than doubled in size every year,” he says. That means his team must constantly build more efficient systems.
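
One standard way to draw such samples (a sketch of the general technique, not Hive’s actual implementation) is one-pass reservoir sampling, which produces a uniform random sample from a stream far too large to hold in memory:

```python
import random

def reservoir_sample(stream, k, seed=None):
    """Uniform random sample of k items from a stream of unknown size,
    computed in a single pass over the data."""
    rng = random.Random(seed)
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            sample.append(item)          # fill the reservoir first
        else:
            j = rng.randint(0, i)        # keep each item with prob k/(i+1)
            if j < k:
                sample[j] = item
    return sample

rows = range(1_000_000)  # stand-in for a petabyte-scale table
print(reservoir_sample(rows, 5, seed=42))
```

Because each item needs only constant work, a sample can be drawn while scanning a table once, which is what makes sampling cheap even at a very large scale.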


All this has given Facebook a unique level of expertise, says Jeff Hammerbacher, Marlow’s predecessor at Facebook, who initiated the company’s effort to develop its own data storage and analysis technology. (He left Facebook in 2008 to found Cloudera, which develops Hadoop-based systems to manage large collections of data.) Most large businesses have paid established software companies such as Oracle a lot of money for data analysis and storage. But now, big companies are trying to understand how Facebook handles its enormous information trove on open-source systems, says Hammerbacher. “I recently spent the day at Fidelity helping them understand how the ‘data scientist’ role at Facebook was conceived … and I’ve had the same discussion at countless other firms,” he says.

As executives in every industry try to exploit the opportunities in “big data,” the intense interest in Facebook’s data technology suggests that its ad business may be just an offshoot of something much more valuable. The tools and techniques the company has developed to handle large volumes of information could become a product in their own right.

Mining for Gold

Facebook needs new sources of income to meet investors’ expectations. Even after its disappointing IPO, it has a staggeringly high price-to-earnings ratio that can’t be justified by the barrage of cheap ads the site now displays. Facebook’s new campus in Menlo Park, California, previously inhabited by Sun Microsystems, makes that pressure tangible. The company’s 3,500 employees rattle around in enough space for 6,600. I walked past expanses of empty desks in one building; another, next door, was completely uninhabited. A vacant lot waited nearby, presumably until someone invents a use of our data that will justify the expense of developing the space.

One potential use would be simply to sell insights mined from the information. DJ Patil, data scientist in residence with the venture capital firm Greylock Partners and previously leader of LinkedIn’s data science team, believes Facebook could take inspiration from Gil Elbaz, the inventor of Google’s AdSense ad business, which provides over a quarter of Google’s revenue. He has moved on from advertising and now runs a fast-growing startup, Factual, that charges businesses to access large, carefully curated collections of data ranging from restaurant locations to celebrity body-mass indexes, which the company collects from free public sources and by buying private data sets. Factual cleans up data and makes the result available over the Internet as an on-demand knowledge store to be tapped by software, not humans. Customers use it to fill in the gaps in their own data and make smarter apps or services; for example, Facebook itself uses Factual for information about business locations. Patil points out that Facebook could become a data source in its own right, selling access to information compiled from the actions of its users. Such information, he says, could be the basis for almost any kind of business, such as online dating or charts of popular music. Assuming Facebook can take this step without upsetting users and regulators, it could be lucrative. An online store wishing to target its promotions, for example, could pay to use Facebook as a source of knowledge about which brands are most popular in which places, or how the popularity of certain products changes through the year.

Hammerbacher agrees that Facebook could sell its data science and points to its currently free Insights service for advertisers and website owners, which shows how their content is being shared on Facebook. That could become much more useful to businesses if Facebook added data obtained when its “Like” button tracks activity all over the Web, or demographic data or information about what people read on the site. There’s precedent for offering such analytics for a fee: at the end of 2011 Google started charging $150,000 annually for a premium version of a service that analyzes a business’s Web traffic.

Back at Facebook, Marlow isn’t the one who makes decisions about what the company charges for, even if his work will shape them. Whatever happens, he says, the primary goal of his team is to support the well-being of the people who provide Facebook with their data, using it to make the service smarter. Along the way, he says, he and his colleagues will advance humanity’s understanding of itself. That echoes Zuckerberg’s often doubted but seemingly genuine belief that Facebook’s job is to improve how the world communicates. Just don’t ask yet exactly what that will entail. “It’s hard to predict where we’ll go, because we’re at the very early stages of this science,” says Marlow. “The number of potential things that we could ask of Facebook’s data is enormous.”

Tom Simonite is Technology Review’s senior IT editor.
By Tom Simonite on June 13, 2012

Find this story at 13 June 2012

copyright http://www.technologyreview.com/

How Facebook Uses Your Data to Target Ads, Even Offline

If you feel like Facebook has more ads than usual, you aren’t imagining it: Facebook’s been inundating us with more and more ads lately, and using your information—both online and offline—to do it. Here’s how it works, and how you can opt out.

For most people, Facebook’s advertising system is inside baseball that doesn’t really affect how we use the service. But as the targeted ads—the advertisements that take the data you provide to offer ads specific to you—get more accurate and start pulling in information from other sources (including the stuff you do offline), it’s more important than ever to understand their system. To figure out how this all works, I spoke with Elisabeth Diana, manager of corporate communication at Facebook. Let’s kick it off with the basics of how the targeted ads work online before moving on to some of the changes we’ll see with the recent inclusion of offline shopping data.

How Facebook Uses Your Profile to Target Ads

We’ve talked before about how Facebook uses you to annoy your friends by turning your likes into subtle ads. This method of sponsored posts is deceptively simple.

The most obvious example of a targeted ad uses something you like—say Target—and then shows an ad on the right side or in the newsfeed that simply says, “[Name] likes Target.” What you and your friends like helps determine what everyone on your friends list sees for ads. Any ad you click on then increases the likelihood of another similar ad.

It’s not just what you and your friends are doing that generates ads though; it’s also basic demographic information. Diana notes that this also includes “major life events like getting engaged or married.” So, if you’re recently engaged and note that on Facebook, you’ll see ads about things like wedding planning.

When an advertiser creates an ad on Facebook, they can select all sorts of parameters so they reach the right people. A simple example of a parameter would be: “Someone engaged to be married, who lives in New York, between the ages of 20-30.” That’s simple, but advertisers can actually narrow that down to insane specifics, like “Someone engaged to be married, who lives in New York, between the ages of 20-30, who likes swimming, and who drives a BMW.” If your profile fits those parameters, you’ll likely see the ad. If you want to see how it works, you can even try your hand at creating an ad.
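
A rough sketch of what that parameter matching amounts to; the field names and rules below are invented for illustration and don’t reflect Facebook’s real ad-serving code:

```python
# Hypothetical targeting parameters, mirroring the example in the text.
ad_params = {
    "relationship": "engaged",
    "city": "New York",
    "age_range": (20, 30),
    "likes": {"swimming"},
    "car": "BMW",
}

# A hypothetical user profile to test against those parameters.
profile = {
    "relationship": "engaged",
    "city": "New York",
    "age": 26,
    "likes": {"swimming", "running"},
    "car": "BMW",
}

def matches(profile, params):
    """True when the profile satisfies every targeting parameter."""
    lo, hi = params["age_range"]
    return (
        profile["relationship"] == params["relationship"]
        and profile["city"] == params["city"]
        and lo <= profile["age"] <= hi
        and params["likes"] <= profile["likes"]  # all required likes present
        and profile["car"] == params["car"]
    )

print(matches(profile, ad_params))  # True
```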

It boils down to this: the information you put about yourself on Facebook—where you live, your age, where (and if) you graduated college, the companies, brands, and activities you like, and even where you work—determines what kind of ads you’ll see. In theory, this makes targeted ads more relevant to you.

What Happens When You Don’t Like or Share Anything

The way Facebook targets ads is based a lot around the information you provide. Using your likes, location, or age, Facebook puts you in a demographic and advertises to you. But what happens when you don’t include any of that information on your profile? It turns out that your friends are used to fill in the gaps.

Chances are, even a barebones profile has a few bits of information about you. You probably at least have where you live and your age. That, combined with the information your friends provide, creates a reasonable demographic through which advertisers can still reach you. The ads won’t be as spookily accurate as they would be if you provided a lot of data, but they’ll at least be about as accurate as a television ad on your favorite show.
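
Purely as an illustration of the idea (not Facebook’s actual method), filling in a missing attribute from friends’ data might look like this:

```python
from statistics import median

def estimated_age(user, friends):
    """Use the user's own age when present; otherwise fall back to the
    median age of friends who did share theirs (illustrative logic only)."""
    if user.get("age") is not None:
        return user["age"]
    friend_ages = [f["age"] for f in friends if f.get("age") is not None]
    return median(friend_ages) if friend_ages else None

user = {"age": None}  # a barebones profile with no age listed
friends = [{"age": 24}, {"age": 27}, {"age": 31}]
print(estimated_age(user, friends))  # 27
```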

How to Keep Facebook from Targeting Ads Online

We know Facebook has an idea of what you’re doing online. That can be unsettling if you’re concerned about your privacy and you don’t want your online habits contributing to advertisements, or if you don’t like the idea of Facebook collecting data about you that you’re not willfully providing. You’ll “miss out” on targeted ads, but here are a few tools to keep that from happening online:

Facebook Disconnect for Chrome and Firefox: Facebook gets notified when you visit a page that uses Facebook Connect (the little “Like” button you find on most web sites, including ours), and that data can be used to target ads. Facebook Disconnect stops that flow of data.
Facebook Privacy List for Adblock Plus: This subscription for Adblock Plus blocks Facebook plugins and scripts from running all over the web so your browsing data doesn’t get tied to your Facebook account.
DoNotTrackMe: DoNotTrackMe is another extension that blocks trackers and anyone who wants to collect your browsing data to create targeted ads.
Finally, you want to opt out of the Facebook Ads that use your actions (liking a page, sharing pages, etc.) to promote ads to your friends:

Click the lock icon when you’re logged into Facebook and select “see more settings”.
Click the “Ads” tab on the sidebar.
Click “Edit” under “Third Party Sites” and change the setting to “No one.”
Click “Edit” under “Ads & Friends” and select “No One.” This disables Social Ads.
So, that takes care of the online advertising. Be sure to check out our guide to Facebook privacy for more information about all that. You can also hide your likes from your profile so they’re not as prominent. If you don’t actually mind the advertising, but want to improve the ads shown to you, you can always click the “X” next to any ad to get rid of it.

How Facebook Uses Your Real World Shopping to Target Ads

Of course, you probably knew about a lot of that already. Using information in your Facebook profile to target ads is old news, but with a few recent partnerships, Facebook is also going to use what you buy in real-life stores to influence and track the ads you see. It sounds spooky, but it’s also older than you may realize.

To do this, Facebook is combining the information they have with information from data collection companies like Datalogix, Acxiom, Epsilon, and BlueKai. These companies already collect information about you through things like store loyalty cards, mailing lists, public records information (including home or car ownership), browser cookies, and more. For example, if you buy a bunch of detergent at Safeway, and use your Safeway card to get a discount, that information is cataloged and saved by a company like Datalogix.

How much do these data collecting companies know? According to The New York Times: way more than you’d think, including race, gender, economic status, buying habits, and more. Typically, they then sell this data to advertisers or corporations, but when it’s combined with your information from Facebook, they get an even better idea of what you like, where you shop, and what you buy. As Diana describes it, Facebook is “trying to give advertisers a chance to reach people both on and off Facebook,” and make advertisements more relevant to you.

How Real-Life Ad Targeting Works


The most shocking thing you’re going to find on Facebook is when something you do in the real world—say, buy a car, go shopping with a loyalty card at a grocery store, or sign up for an email list—actually impacts the ads you see. This is no different than any other direct marketing campaign like junk mail, but seeing it on Facebook might be a little unsettling at first. There are a couple reasons this might happen: custom audiences, and the recent partnerships with data collection companies we talked about earlier.

Custom audiences are very simple: they basically allow an advertiser to upload an email list and compare that data (privately) with who’s on Facebook. Diana offered the simple example of buying a car. Let’s say you purchase a car from a dealership, and when you do so, you give them your email address. That dealership wants to advertise on Facebook, so it uploads a list of all the email addresses it has. That data is kept private, and Facebook pairs each email address with the one you registered on Facebook. If they match, you might see an ad from that dealership on Facebook for a discounted tune-up or something similar. Additionally, Lookalike audiences might be used to advertise to people similar to you because you purchased a car there. That might mean your friends (assuming you’re all similar) will see the same ad from the dealership.
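
The private matching step can be sketched with ordinary cryptographic hashing; the article doesn’t specify Facebook’s exact scheme, so the SHA-256 choice and the normalization rules below are assumptions:

```python
import hashlib

def hash_email(email):
    """Normalize an address, then hash it, so two parties can look for
    matches without exchanging raw email addresses."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# The dealership uploads hashes of its customer emails...
dealership_list = {hash_email(e) for e in ["buyer@example.com", "other@example.com"]}

# ...and Facebook hashes the addresses its users registered with.
facebook_accounts = {
    hash_email("Buyer@Example.com"): "user_123",
    hash_email("nobody@example.net"): "user_456",
}

matched_users = [uid for h, uid in facebook_accounts.items() if h in dealership_list]
print(matched_users)  # ['user_123']
```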

The custom audiences can be used by any company advertising on Facebook. So, if you’re on your dentist’s email list, or that small bakery around the corner snagged your email for a free slice of pie, they can potentially reach you through this system.

The partnership with other data collection agencies like Acxiom and Datalogix is going to look a little different. This means that when you use something like a customer loyalty card at a grocery store, you might see a targeted ad that reflects that. The New York Times offers this example:

At the very least, said Ms. Williamson, an analyst with the research firm eMarketer, consumers will be “forced to become more aware of the data trail they leave behind them and how companies are putting all that data together in new ways to reach them.” She knows, for instance, that if she uses her supermarket loyalty card to buy cornflakes, she can expect to see a cornflakes advertisement when she logs in to Facebook.
A new targeting feature, Partner categories, takes the data collected by these third-party data brokers and puts you into a group. So, if you’re in a group of people who buys a lot of frozen pizza at Safeway, you’ll see ads for frozen pizza, and maybe other frozen foods.

It sounds a little weird at a glance, but it’s important to remember that this is all information that you’re already providing. Facebook is using data collected by outside companies to create a more accurate portrayal of you so marketers can advertise to you directly.

How Your Data Is Kept Private

How Facebook Uses Your Data to Target Ads, Even Offline
All of this information being exchanged should make the hairs on the back of your neck stand up a little. If anything goes wrong, it could leak a bunch of your private information all over the place. Or, at the very least, marketers would get a lot more information about you than you want, like your username, email, and location data. To keep your information private, Facebook uses a system called hashing.

First, your personal information like email and name is encrypted. So, your name, login info, and anything else that would identify you as a person goes away. Then, Facebook turns the rest of the information into a series of numbers and letters using hashing. For example, Age: 31, Likes: Lifehacker, Swimming, BMWs, Location: New York, turns into something like, “342asafk43255adjk.” Finally, this information is combined with what the data collection companies have on you to create a better picture of your shopping habits so they can target ads. Slate describes the system like so:

What they came up with was a Rube Goldbergian system that strips out personally identifiable information from the databases at Facebook, Datalogix, and the major retailers while still matching people and their purchases. The system works by creating three separate data sets. First, Datalogix “hashes” its database—that is, it turns the names, addresses and other personally identifiable data for each person in its logs into long strings of numbers. Facebook and retailers do the same thing to their data. Then, Datalogix compares its hashed data with Facebook’s to find matches. Each match indicates a potential test subject: someone on Facebook who is also part of Datalogix’s database. Datalogix runs a similar process with retailers’ transaction data. At the end of it all, Datalogix can compare the Facebook data and the retail data, but, importantly, none of the databases will include any personally identifiable data—so Facebook will never find out whether and when you, personally, purchased Tide, and Procter & Gamble and Kroger will never find out your Facebook profile.
From the actual advertiser’s point of view, the flow of information doesn’t reveal personal details. It just tells them how many potential customers might see an ad. “An advertiser would learn something like, ‘about 50% of your customers are on Facebook,'” says Diana, “But they don’t know who you are.”
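
That aggregate-only answer falls out naturally from comparing hashed sets, as in this toy version (the addresses and the hash choice are illustrative, not the real system’s):

```python
import hashlib

def hashed(records):
    """Hash every record so the comparison never touches raw identities."""
    return {hashlib.sha256(r.lower().encode()).hexdigest() for r in records}

facebook = hashed(["ann@x.com", "bob@x.com", "cat@x.com", "dan@x.com"])
retailer = hashed(["bob@x.com", "cat@x.com", "zoe@x.com"])

# Neither side can recover the other's names, but the overlap is countable,
# which is enough for answers like "about 50% of your customers are on Facebook."
overlap = len(facebook & retailer)
print(f"{overlap / len(retailer):.0%} of the retailer's customers matched")  # 67%
```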

How to Opt Out of Offline Targeting

Unlike the internal advertising system that uses the information you already provide to Facebook to give you ads, these new partnerships with real world data collection agencies go way beyond that. Now, they’re able to see what you’re buying at stores offline, and that’s disconcerting for a lot of people. The goal, of course, is more relevant ads, but that comes at the price of privacy and security. With all this data out there, it would be easy to get a very clear image of who you are, where you live, what you like, and even if you’re pregnant. Thankfully, opting out of the data collection companies also gets you out of the integration with Facebook (and everywhere else).

This process is a lot more complicated than it should be, but the Electronic Frontier Foundation has a step-by-step guide for each of the data brokers. Basically, you’ll need to opt out in three different places (Acxiom, Datalogix, and Epsilon) to ensure your shopping data from the real world isn’t used on Facebook (and beyond). BlueKai, unfortunately, has no direct way to opt out, so you’ll need to use the browser extensions listed in the first section.

If you really want to keep those loyalty cards from tracking you, just use Jenny’s number (867-5309) at the checkout lane instead of setting up an account.

Those are the basics of how Facebook’s various targeted advertising systems work. Of course, a lot of complex math and algorithms are in place to actually generate this data, but it really boils down to how much information you’re making public—whether you’re aware of it or not—that makes the system tick. If you like the targeted ads, they should improve even more as the years go on. If you don’t, opting out is always an option.

Thorin Klosowski
4/11/13 8:00am

Find this story at 11 April 2013

copyright http://lifehacker.com/

Facebook Tests Software to Track Your Cursor on Screen

Facebook Inc. is testing technology that would greatly expand the scope of data that it collects about its users, the head of the company’s analytics group said Tuesday.

The social network may start collecting data on minute user interactions with its content, such as how long a user’s cursor hovers over a certain part of its website, or whether a user’s newsfeed is visible at a given moment on the screen of his or her mobile phone, Facebook analytics chief Ken Rudin said Tuesday during an interview.

Mr. Rudin said the captured information could be added to a data analytics warehouse that is available for use throughout the company for an endless range of purposes–from product development to more precise targeting of advertising.

Facebook collects two kinds of data, demographic and behavioral. The demographic data—such as where a user lives or went to school—documents a user’s life beyond the network. The behavioral data—such as one’s circle of Facebook friends, or “likes”—is captured in real time on the network itself. The ongoing tests would greatly expand the behavioral data that is collected, according to Mr. Rudin. The tests are ongoing and part of a broader technology testing program, but Facebook should know within months whether it makes sense to incorporate the new data collection into the business, he said.

New types of data Facebook may collect include “did your cursor hover over that ad … and was the newsfeed in a viewable area,” Mr. Rudin said. “It is a never-ending phase. I can’t promise that it will roll out. We probably will know in a couple of months,” said Mr. Rudin, a Silicon Valley veteran who arrived at Facebook in April 2012 from Zynga Inc., where he was vice president of analytics and platform technologies.

As the head of analytics, Mr. Rudin is preparing the company’s infrastructure for a massive increase in the volume of its data.

Facebook isn’t the first company to contemplate recording such activity. Shutterstock Inc., a marketplace for digital images, records literally everything that its users do on the site. Shutterstock uses the open-source Hadoop distributed file system to analyze data such as where visitors to the site place their cursors and how long they hover over an image before they make a purchase. “Today, we are looking at every move a user makes, in order to optimize the Shutterstock experience….All these new technologies can process that,” Shutterstock founder and CEO Jon Oringer told the Wall Street Journal in March.

Facebook also is a major user of Hadoop, an open-source framework that is used to store large amounts of data on clusters of inexpensive machines. Facebook designs its own hardware to store its massive data analytics warehouse, which has grown 4,000 times during the last four years to a current level of 300 petabytes. The company uses a modified version of Hadoop to manage its data, according to Mr. Rudin. There are additional software layers on top of Hadoop, which rank the value of data and make sure it is accessible.
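The batch jobs such a warehouse runs are conceptually simple: group raw interaction records by key and compute an aggregate. The sketch below illustrates the reduce step of the kind of analysis Shutterstock describes (average hover time before a purchase, per image) as plain in-memory code; the record shape and field names are assumptions, not any company’s actual schema.

```typescript
// Hypothetical reduce step over hover logs -- illustrative only.
// In a real Hadoop pipeline this logic would run as a reducer over
// records grouped by imageId; here it is a plain in-memory function.

interface HoverRecord {
  imageId: string;   // key the job groups by
  hoverMs: number;   // how long the cursor hovered over the image
  purchased: boolean; // did the session end in a purchase?
}

// Average hover time preceding a purchase, per image.
function avgHoverBeforePurchase(records: HoverRecord[]): Map<string, number> {
  const sums = new Map<string, { total: number; n: number }>();
  for (const r of records) {
    if (!r.purchased) continue; // only sessions that converted
    const s = sums.get(r.imageId) ?? { total: 0, n: 0 };
    s.total += r.hoverMs;
    s.n += 1;
    sums.set(r.imageId, s);
  }
  const out = new Map<string, number>();
  for (const [id, s] of sums) out.set(id, s.total / s.n);
  return out;
}
```

The value of the distributed layers Mr. Rudin describes (storage, ranking, indexing) is that aggregations like this can run over petabytes of such records rather than an in-memory array.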

The data in the analytics warehouse—which is separate from the company’s user data, the volume of which has not been disclosed—is used in the targeting of advertising. As the company captures more data, it can help marketers target their advertising more effectively—assuming, of course, that the data is accessible.

“Instead of a warehouse of data, you can end up with a junkyard of data,” said Mr. Rudin, who spoke to CIO Journal during a break at the Strata and Hadoop World Conference in New York. He said that he has led a project to index that data, essentially creating an internal search engine for the analytics warehouse.

October 30, 2013, 7:15 AM ET
By STEVE ROSENBUSH

Find this story at 30 October 2013

Copyright ©2014 Dow Jones & Company, Inc

Report: Facebook Is Collecting Data on Your Cursor Movements

Facebook may be adding to the list of things it knows about you.

The social network is reportedly experimenting with new technology that tracks and collects data about a user’s activity on the site, including cursor movements, according to the Wall Street Journal. The technology is being tested now with a small group of users.

The data could be used in a number of different ways, from product development to advertising, Facebook analytics chief Ken Rudin told the Journal.

The technology can supposedly determine where a user is hovering his or her cursor on the screen, meaning it could be used to determine the most appropriate places for advertisements. The technology also tracks whether Facebook’s mobile users can see their News Feed at any particular time from their smartphone.

Facebook did not immediately respond to Mashable’s request for comment.

Facebook will reportedly decide “within months” whether or not to continue this data collection and analysis. The data could be relevant to targeted advertising, an area where Facebook has already seen quarter-over-quarter growth in 2013.

Facebook is set to report its quarterly earnings Wednesday afternoon.

UPDATE, Oct. 30, 8:55 p.m. ET: Facebook responded to our request for comment with the following statement:

“Like most websites, we run numerous tests at any given time to ensure that we’re creating the best experience possible for people on Facebook. These experiments look at aggregate trends of how people interact with the site to inform future product decisions. We do not share this information with anyone outside of Facebook and we are not using this information to target ads.”

By Kurt Wagner | Oct 30, 2013

Find this story at 30 October 2013

copyright http://mashable.com/

What Facebook Collects and Shares

What Facebook could know about you, and why you should care.

Facebook is a resource for opinions and hobbies, celebrities and love interests, friends and family, and all the activities that whirl them together in our daily lives. Much like other social networking sites, Facebook is free except for one thing that all users give up: a certain amount of personal information.

Facebook’s privacy policy provides extensive information about the use of registered users’ personal data. It clearly specifies what personal information is collected, how it is used, the parties to whom this information may be disclosed, and the security measures taken to protect it.

By reading and understanding the privacy policy, a user is able to weigh the risks involved in trusting this popular Web site, before one enters any personal information into its pages or installs its applications.

Information Collected by Facebook
Facebook collects two types of information: personal details provided by a user and usage data collected automatically as the user spends time on the Web site clicking around.

Personal information is disclosed willingly by the user: name, email address, telephone number, address, gender and schools attended, for example. Facebook may request permission to use the user’s email address to send occasional notifications about new services offered.

Facebook also records Web site usage data describing how users access the site, such as the type of web browser they use, the user’s IP address, how long they stay logged in, and other statistics. Facebook compiles this data to understand trends, improve the site, and inform marketing decisions.

Facebook now has fine-grained privacy settings for its users. Users can decide which part of their information should be visible and to whom. Facebook categorizes members of the user’s network as “Friends” and “Friends of Friends,” or a broader group, such as a university or locality, and “Everyone,” which includes all users of the site. The categorization increases the granularity of the privacy settings in a user’s profile.
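The tiered audiences described above amount to a reachability check in the friendship graph. The sketch below is a minimal, hypothetical model of such a check (the type names and graph shape are assumptions, not Facebook’s actual data model): “Friends” is one hop, “Friends of Friends” is two hops, and “Everyone” skips the graph entirely.

```typescript
// Hypothetical sketch of tiered audience checks -- not Facebook's model.

type Audience = "friends" | "friendsOfFriends" | "everyone";

// Adjacency sets: user id -> the ids of that user's friends.
interface Graph { friends: Map<string, Set<string>> }

function canView(graph: Graph, owner: string, viewer: string,
                 audience: Audience): boolean {
  if (viewer === owner || audience === "everyone") return true;
  const f = graph.friends.get(owner) ?? new Set<string>();
  if (f.has(viewer)) return true; // direct friends pass both tiers
  if (audience === "friendsOfFriends") {
    // Two-hop check: is the viewer a friend of any of the owner's friends?
    for (const mid of f) {
      if ((graph.friends.get(mid) ?? new Set<string>()).has(viewer)) {
        return true;
      }
    }
  }
  return false;
}
```

The per-item granularity the article describes corresponds to storing an `Audience` value on each piece of profile information and running a check like this on every read.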

Children: No one under 13 is permitted to register. Children between 13 and 18 require parental permission before sending personal information over Internet. A policy alone, however, does not stop children from using the site, and parents must be watchful of their children’s online activities in order to enforce these policies.

Facebook stores users’ personal information on secure servers behind a firewall.

Sharing of Information with Third Parties
Facebook does not provide personal information to third parties without the user’s consent. Facebook also limits the information available to Internet search engines. Before accepting third-party services, Facebook makes the third party sign an agreement that holds it responsible for any misuse of personal information. However, advertising by third parties on Facebook can lead to their gaining access to user information, such as IP address or cookie-based web usage information that allows personalization of advertisements.

Precautions for Users
Facebook provides thousands of third-party applications for its users to download, and personalizes the advertisements for these applications on users’ profiles. It does this by mining other sources on the Internet for information about users’ likes and interests. Sources for such mined data include newspapers, blogs and instant messaging, used to provide services customized to the user’s personality. However, because these sources are not affiliated with Facebook, this raises concerns about data mining by those sources.

Facebook does not actually provide a mechanism for users to close their accounts, which raises the concern that private user data will remain indefinitely on Facebook’s servers.

Over time, the CEO and Board of Directors of a company change, or the company may even be sold. Under such circumstances, a concern arises about the private information held by the company. Deactivation without deletion of a user’s account implies that the data continue to be present on the servers. If a company is then sold, the data of those users who are currently deactivated may be subject to compromise.

Conclusion
Facebook has an explicitly stated privacy policy. It aims to enhance the social networking experience of users by reducing their concerns about the privacy of their data on the Web site. However, the more the Web site tries to incorporate open innovation by allowing third-party access and other such facilities, the more it puts personal information at risk, thereby increasing the probability of losing the trust of its users.

Find this story at 2014

Copyright © 2003–2012 Carnegie Mellon CyLab

Where Does Facebook Stop and the NSA Begin?

Sometimes it’s hard to tell the difference.

“That social norm is just something that has evolved over time” is how Mark Zuckerberg justified hijacking your privacy in 2010, after Facebook imperiously reset everyone’s default settings to “public.” “People have really gotten comfortable sharing more information and different kinds.” Riiight. Little did we know that by that time, Facebook (along with Google, Microsoft, etc.) was already collaborating with the National Security Agency’s PRISM program that swept up personal data on vast numbers of internet users.

In light of what we know now, Zuckerberg’s high-hat act has a bit of a creepy feel, like that guy who told you he was a documentary photographer, but turned out to be a Peeping Tom. But perhaps we shouldn’t be surprised: At the core of Facebook’s business model is the notion that our personal information is not, well, ours. And much like the NSA, no matter how often it’s told to stop using data in ways we didn’t authorize, it just won’t quit. Not long after Zuckerberg’s “evolving norm” dodge, Facebook had to promise the feds it would stop doing things like putting your picture in ads targeted at your “friends”; that promise lasted only until this past summer, when it suddenly “clarified” its right to do with your (and your kids’) photos whatever it sees fit. And just this week, Facebook analytics chief Ken Rudin told the Wall Street Journal that the company is experimenting with new ways to suck up your data, such as “how long a user’s cursor hovers over a certain part of its website, or whether a user’s newsfeed is visible at a given moment on the screen of his or her mobile phone.”

There will be a lot of talk in coming months about the government surveillance golem assembled in the shadows of the internet. Good. But what about the pervasive claim the private sector has staked to our digital lives, from where we (and our phones) spend the night to how often we text our spouse or swipe our Visa at the liquor store? It’s not a stretch to say that there’s a corporate spy operation equal to the NSA—indeed, sometimes it’s hard to tell the difference.

Yes, Silicon Valley libertarians, we know there is a difference: When we hand over information to Facebook, Google, Amazon, and PayPal, we click “I Agree.” We don’t clear our cookies. We recycle the opt-out notice. And let’s face it, that’s exactly what internet companies are trying to get us to do: hand over data without thinking of the transaction as a commercial one. It’s all so casual, cheery, intimate—like, like?

But beyond all the Friends and Hangouts and Favorites, there’s cold, hard cash, and, as they say on Sand Hill Road, when the product is free, you are the product. It’s your data that makes Facebook worth $100 billion and Google $300 billion. It’s your data that info-mining companies like Acxiom and Datalogix package, repackage, sift, and sell. And it’s your data that, as we’ve now learned, tech giants also pass along to the government. Let’s review: Companies have given the NSA access to the records of every phone call made in the United States. Companies have inserted NSA-designed “back doors” in security software, giving the government (and, potentially, hackers—or other governments) access to everything from bank records to medical data. And oh, yeah, companies also flat-out sell your data to the NSA and other agencies.

To be sure, no one should expect a bunch of engineers and their lawyers to turn into privacy warriors. What we could have done without was the industry’s pearl-clutching when the eavesdropping was finally revealed: the insistence (with eerily similar wording) that “we have never heard of PRISM”; the Captain Renault-like shock—shock!—to discover that data mining was going on here. Only after it became undeniably clear that they had known and had cooperated did they duly hurl indignation at the NSA and the FISA court that approved the data demands. Heartfelt? Maybe. But it also served a branding purpose: Wait! Don’t unfriend us! Kittens!

O hai, check out Mark Zuckerberg at this year’s TechCrunch conference: The NSA really “blew it,” he said, by insisting that its spying was mostly directed at foreigners. “Like, oh, wonderful, that’s really going to inspire confidence in American internet companies. I thought that was really bad.” Shorter: What matters is how quickly Facebook can achieve total world domination.

Maybe the biggest upside to l’affaire Snowden is that Americans are starting to wise up. “Advertisers” rank barely behind “hackers or criminals” on the list of entities that internet users say they don’t want to be tracked by (followed by “people from your past”). A solid majority say it’s very important to control access to their email, downloads, and location data. Perhaps that’s why, outside the more sycophantic crevices of the tech press, the new iPhone’s biometric capability was not greeted with the unadulterated exultation of the pre-PRISM era.

The truth is, for too long we’ve been content to play with our gadgets and let the geekpreneurs figure out the rest. But that’s not their job; change-the-world blather notwithstanding, their job is to make money. That leaves the hard stuff—like how much privacy we’ll trade for either convenience or security—in someone else’s hands: ours. It’s our responsibility to take charge of our online behavior (posting Carlos Dangerrific selfies? So long as you want your boss, and your high school nemesis, to see ‘em), and, more urgently, it’s our job to prod our elected representatives to take on the intelligence agencies and their private-sector pals.

The NSA was able to do what it did because, post-9/11, “with us or against us” absolutism cowed any critics of its expanding dragnet. Facebook does what it does because, unlike Europe—where both privacy and the ability to know what companies have on you are codified as fundamental rights—we haven’t been conditioned to see Orwellian overreach in every algorithm. That is now changing, and both the NSA and Mark Zuckerberg will have to accept it. The social norm is evolving.

—By Monika Bauerlein and Clara Jeffery | November/December 2013 Issue

Find this story at November/December 2013

Copyright ©2014 Mother Jones and the Foundation for National Progress.