
Data collection brings more benefits than loss, experts say

WASHINGTON – You’re probably one of the 91 percent of American adults who think they’ve lost control over how their personal information is collected and used by companies, according to a Pew Research study from early 2015. But big data collection brings benefits that outweigh the potential downsides, contended Ben Wittes, a senior fellow at the Brookings Institution, in a panel discussion at the Capitol Visitor Center last Thursday.

Consumers’ concerns about online privacy are at an all-time high due to emerging technologies – for instance, e-commerce and mobile devices – that collect large amounts of consumer data, the Pew Research study says.

However, people who worry about “privacy eroding into the river and being gone forever,” added Wittes, ignore how those benefits actually increase privacy.

The rise of online sales has meant you can mail-order products that might be too embarrassing to buy in person, Wittes added. “Without looking at somebody in the eye, without confessing the interest in this subject, you get what you want.”

Because all e-books look the same on an e-reader, for instance, you can read Fifty Shades of Grey on your Kindle without shame—which may explain why the e-version of this book has outsold its printed version.

The value of the privacy of those purchases, Wittes argued, outweighs the value of the data given for them—like email, credit card numbers, browsing history, personal preferences, and location-based information.

Wittes suggested changing the vocabulary consumers use to describe what they get in exchange for giving up some personal information. It’s not only “convenience,” he said, “it’s also privacy benefits.”

Joshua New, policy analyst at the Information Technology and Innovation Foundation, said data collection also brings economic benefits to consumers.

He cited car insurance as an example. Instead of having premiums set by broad factors – for instance, age, gender and neighborhood – drivers could use data to prove that they are cautious and don’t brake abruptly, earning lower premiums even if traditional measurements would place them in the high-risk category, New said.

People who strive for online privacy should be aware that it comes at a cost. Adam Thierer, a senior research fellow at George Mason University, said people can protect their privacy if they don’t mind losing the benefits that come with giving up their data.

“Companies can offer paid options where user information won’t be collected,” Thierer said. “But at the moment, I don’t think many people will pay for their privacy.”

A balance between consumer privacy and technological innovation is what the Federal Trade Commission is pursuing. Prohibiting data collection outright would create barriers to breakthrough innovation and is not the solution.

“We should definitely limit the use of data,” said Federal Trade Commission member Maureen Ohlhausen, “but not limit the collection of data.”


Published in conjunction with PC World

A How-to Guide for Encrypting and Protecting Digital Communications using PGP

BY AARON RINEHART FOR THE MEDILL NSJI

“Encryption works. Properly implemented strong crypto systems are one of the few things that you can rely on. Unfortunately, endpoint security is so terrifically weak that NSA can frequently find ways around it.”

— Edward Snowden, answering questions live on the Guardian’s website

From surveillance to self-censorship, journalists are being subjected to increased threats from foreign governments, intelligence agencies, hacktivists and other actors who seek to limit or otherwise manipulate the information they possess. Edward Snowden stressed the importance of encryption for journalists in an encrypted interview with The New York Times: “It should be clear that unencrypted journalist-source communication is unforgivably reckless.” Journalists who communicate insecurely and without encryption put themselves, their sources and their reporting at unnecessary risk. Such risky behavior can also send the wrong message to potential key sources, as when Glenn Greenwald nearly missed out on the landmark story of National Security Agency surveillance set out in the Snowden documents because he wasn’t communicating via encryption.

The aim of this how-to guide is to provide a clear path forward for journalists to protect the privacy of their reporting and the safety of their sources by employing secure communication methodologies that are proven to deliver.

How and When Should I Encrypt?

Understanding the basics of encryption and applying these tools and techniques to a journalist’s reporting is rapidly becoming the new normal when conducting investigative research and communicating with sources. It is, therefore, just as vital to know when and how to encrypt sensitive data as it is to understand the tools needed to do it.

In terms of when to encrypt, confidential information should be encrypted both “at rest” and “in transit” (or “in motion”). Data-at-rest is data sitting on storage media – for example, a file in a folder on a computer’s desktop or an email in a user’s inbox. Data-in-transit is data that has left that resting state and is in motion – for example, a file being sent in an email or transferred to a file server. With data-in-transit, how the data travels from sender to receiver matters as much as the message itself. A public wireless network illustrates the point: if the network is not set up to use strong encryption to secure your connection, someone may be able to intercept your communications. Sensitive data that is not encrypted while in transit can be compromised.

Methods for protecting Data-at-Rest and Data-in-Transit

Data-at-Rest can be protected through the following methods.

One suggested methodology is to encrypt the entire contents of the storage media, such as a hard drive on a computer or an external drive containing sensitive material. This method provides a higher level of security and can be advantageous in the event of a loss or theft of the storage media.

A second method – which should ideally be combined with the first – is to encrypt the files, folders and email containing sensitive data using Pretty Good Privacy (PGP) encryption. PGP has the added benefit of protecting data-in-transit, since the data stays encrypted while in motion.

The name itself doesn’t inspire much confidence, but PGP or “Pretty Good Privacy” encryption has held strong as the preferred method by which individuals can communicate securely and encrypt files.

The concepts surrounding PGP and getting it operational can often seem complex, but this guide aims to make the process of getting started and using PGP clearer.
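To make the file-encryption case concrete, here is a minimal sketch using the third-party python-gnupg wrapper around GnuPG. The wrapper is an assumption for illustration (this guide’s GUI tools achieve the same result), and the path, file name and passphrase are placeholders:

    import gnupg

    # Point the wrapper at a GnuPG home directory (placeholder path).
    gpg = gnupg.GPG(gnupghome='/home/reporter/.gnupg')

    # Symmetric (passphrase-only) encryption of a file at rest.
    # No keypair is needed yet; the passphrase alone unlocks the file.
    with open('notes.txt', 'rb') as f:
        result = gpg.encrypt_file(
            f,
            recipients=None,         # no public keys: purely passphrase-based
            symmetric=True,
            passphrase='a strong diceware passphrase goes here',
            output='notes.txt.gpg',  # the encrypted copy, safe to store
        )

    print('Encrypted OK' if result.ok else result.status)

The original notes.txt should then be securely deleted, leaving only the encrypted copy at rest.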

PGP Essentials: The Basics of Public and Private Keys

Before diving too deeply into the software setup needed to use PGP, it is important to understand a few key fundamentals of how PGP encryption works.

Within PGP and most public-key cryptography, each user has two keys that form something called a keypair. The reason the two keys are referred to as a keypair is that the two are mathematically linked.

The two keys used by PGP are a private key, which must always be kept secret, and a public key, which is distributed to the people with whom the user chooses to communicate. In email communications, the private key is used to decrypt messages received from senders, while the public key is what others use to encrypt mail to the user.

In order to send someone an encrypted email, the sender must first have that recipient’s public key and have established a trusted relationship. Most encryption systems for digital communications are based on establishing trust between the communicating parties, and with PGP, exchanging public keys is the first step in that process.
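As a concrete sketch of that first step, the same python-gnupg wrapper can generate a keypair and export the public half for sharing. The identity, path and passphrase below are placeholder assumptions, not a prescribed setup:

    import gnupg

    gpg = gnupg.GPG(gnupghome='/home/reporter/.gnupg')  # placeholder path

    # Generate a keypair; the private half never leaves this machine.
    key_settings = gpg.gen_key_input(
        key_type='RSA',
        key_length=4096,            # large key size, per the guidance below
        name_real='Jane Reporter',  # placeholder identity
        name_email='jane.reporter@example.com',
        passphrase='a strong diceware passphrase goes here',
        expire_date='1y',           # key expires in one year
    )
    key = gpg.gen_key(key_settings)

    # Export ONLY the public key; this ASCII-armored block is what you share.
    print(gpg.export_keys(key.fingerprint))

    # Importing a source's public key completes the exchange:
    # gpg.import_keys(source_public_key_text)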

Key Management: Best Practices

Regardless of whether the user is using the OpenPGP standard with GNU Privacy Guard (GPG) or another derivative, there are a few useful points to consider in terms of encryption key management.

Private Keys are Private!

The most important concept to remember is that private keys must be kept private. If a user’s private key is compromised, all communications encrypted to it become trivial to intercept and read.

Generating Strong Encryption Keys

When generating strong private/public keypairs there are some important things to remember:

  • Utilize Large Key Sizes and Strong Hashing Algorithms

It is recommended that when generating a keypair, the user choose a key size of at least 4096-bit RSA with the SHA-512 hashing algorithm. The encryption key is one of the most important pieces of how the encryption operations are executed: it provides the unique “secret” input that becomes the basis for the mathematical operations executed by the encryption algorithm. A larger key size strengthens the cryptographic operations because it complicates the math with a larger input value, making the encryption more difficult to break.

  • Set Encryption Key Expiration Dates

Choose an expiration date less than two years in the future.

  • Strong Passphrase

From a security perspective, the passphrase is usually the most vulnerable part of the encryption procedure. It is highly recommended that the user choose a strong passphrase.

In general terms, the goal should be to create a passphrase that is easy to remember and to type when needed, but very hard for someone else to guess.

A well-known method for creating strong but easy-to-remember passwords is called “diceware.” Diceware is a method for creating passphrases, passwords and other cryptographic variables using ordinary dice as a random number generator. The numbers generated by rolling the dice are used to select words at random from a special list called the Diceware Word List. When using diceware to create a PGP passphrase, the recommendation is to use a minimum of six words. An alternative method for creating and storing strong passphrases is to use a secure password manager such as KeePass.
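To illustrate the idea, the short Python sketch below substitutes the operating system’s random number generator for physical dice and uses a tiny stand-in word list. The real Diceware Word List contains 7,776 words; the list here is for demonstration only:

    import secrets

    # Tiny stand-in for the 7,776-word Diceware Word List (illustrative only).
    WORDLIST = [
        'cabbage', 'lantern', 'orbit', 'puzzle', 'quartz',
        'ripple', 'saddle', 'timber', 'velvet', 'walnut',
    ]

    def diceware_passphrase(num_words=6):
        # secrets.choice draws from the OS CSPRNG, standing in for dice rolls.
        return ' '.join(secrets.choice(WORDLIST) for _ in range(num_words))

    print(diceware_passphrase())  # e.g. 'walnut orbit saddle quartz ripple timber'

With the full word list, a six-word passphrase carries roughly 77 bits of entropy; the strength comes from the random draw, not from keeping the word list secret.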

Backing Up Private Keys

Although a journalist may be practicing good security by encrypting sensitive information, it would be devastating if a disruptive event – such as a computer hardware failure – caused them to lose their private key, as the data would be near-impossible to decrypt without it. When backing up a private key, it is important to remember that it should only be stored on trusted media, a trusted database or a storage drive, preferably encrypted.

Public Key Servers

There are several PGP public key servers available on the web. It is recommended that journalists upload a copy of their public keys to a public key server such as hkp://pgp.mit.edu to open their reporting up to potential sources who wish to communicate securely. Once the public key is on the key server, anyone who wants to communicate can search by name, alias or email address to find the public key of the person they’re looking for and import it.

Validating Public Keys: Fingerprints

When a public key is received over an untrusted channel like the Internet, it is important to authenticate the public key using the key’s fingerprint. The fingerprint of an encryption key is a unique sequence of letters and numbers used to identify the key. Just like the fingerprints of two different people, the fingerprints of two different keys can never be identical. The fingerprint is the preferred method to identify a public key. When validating a public key using its fingerprint, it is important to validate the fingerprint over an alternative trusted channel.

For example, if a journalist retrieves a source’s public key from a public key server, it is important to validate the key by comparing fingerprints in person, over a secure phone call or via some other alternate communication channel. The purpose of key validation is to guarantee that the person being communicated with is the key’s true owner.
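In code, that validation amounts to comparing the fingerprint obtained over the trusted channel with the fingerprint of the downloaded key. Continuing the earlier python-gnupg sketch (the fingerprint value is a made-up placeholder):

    # Fingerprint the source read aloud over the trusted channel (placeholder).
    TRUSTED_FINGERPRINT = '0123456789ABCDEF0123456789ABCDEF01234567'

    # Compare against the fingerprints of keys imported from the keyserver.
    for key in gpg.list_keys():
        if key['fingerprint'] == TRUSTED_FINGERPRINT:
            print('Fingerprint matches key for:', key['uids'])
            break
    else:
        print('No match; do not trust the downloaded key.')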

Adding PGP Public Key Fingerprint to Twitter

For journalists, it’s important to ensure sources can quickly validate the public keys they retrieve from public key servers. A common method is to tweet the public key fingerprint and link to that tweet in the bio.

Another method is to link directly to the PGP key on a public keyserver (like MIT’s) and to provide a copy of the key fingerprint in the bio, like this example with Barton Gellman.


GNU Privacy Guard: Encrypting Email with GPG

Hold up, stop and wait a minute: I thought the topic of discussion was PGP.

Is GPG a typo?

No. In fact, GPG (or the GNU Privacy Guard) is the GPL-licensed alternative to the PGP suite of encryption software. Both GPG and PGP utilize the same OpenPGP standard and are fully compatible with one another.

Getting Started with GPG: From Setup to Secure

A Step-by-Step Guide to Setting up GPGTools on Apple OSX

Tutorial Objectives

  • How to install and configure PGP on OS X
  • How to use PGP operationally

Install the GPGTools GPG Suite for OS X

This step is simple. Visit the GPGTools website and download the GPG Suite for OS X. Once downloaded, mount the DMG and run the “Install.”


Select all modules and then press “Install.”


Generating a New PGP key

When the installer completes, a new app called “GPG Keychain Access” will launch. A small window will pop up immediately and say: “GPG Keychain Access would like to access your contacts.” Press “OK.”


After pressing “OK,” a second window will pop up that says “Generate a new keypair.” Type in your name and your email address, and check the box that says “Upload public key after generation.”

Expand the “Advanced options” section. Increase the key length to 4096 for extra security, and reduce the “Expiration date” to one year from today.

Press “Generate key.”

After pressing “Generate key,” the “Enter passphrase” window will pop up.

Okay, now this is important

The Importance of Good Passphrases

The entire PGP encryption process will rest on the passphrase that is chosen.

First and foremost: Don’t use a passphrase that other people know! Pick something only you will know and others can’t guess. Once you have a passphrase selected, don’t give it to other people.

Second, do not use a password, but rather a passphrase — a sentence. For example, “ILoveNorthwesternU!” is less preferable than “I graduated from Northwestern U in 1997 and it’s the Greatest U on Earth?!” The longer your passphrase, the more secure your key.

Lastly, make sure your passphrase is something you can remember. Since it is long, there is a chance you might forget it. Don’t; the consequences would be dire. In general, there are several methods you can employ to store your passphrase and ensure its safekeeping. One such method is to use a password manager like KeePass, an open-source encrypted password database that securely stores your passwords.

Once you decide on your passphrase, type it in the “Enter passphrase” window. Turn on the “Show typing” option, so you can be 100% sure that you’ve typed in your passphrase without any spelling errors. When everything looks good, press “OK.”

You will be asked to reenter the passphrase. Do so and press “OK.”

You will then see a message saying, “We need to generate a lot of random bytes…” Wait for the process to complete.

Your PGP key is now ready to use.

Setup PGP Quick Access Shortcuts

Open System Preferences, select the “Keyboard” pane and go to the “Shortcuts” tab.

On the left hand side, select “Services.” Then, on the right, scroll down to the subsection “Text” and look for a bunch of entries that start with “OpenPGP:”

Go through each OpenPGP entry and check each one.


Bravo! You’re now done setting up PGP with GPGTools on OS X!

Now, let’s discuss how to use it.

How to send a secure email

To secure an email in PGP, you will sign and encrypt the body of the message. You can just sign or just encrypt, but combining both operations will result in optimum security.

Conversely, when you receive a PGP-secured email, you will decrypt and verify it. This is the “opposite” of signing and encrypting.

Start off by writing an email:

  1. Select the entire body of the email, then right-click and go to Services -> OpenPGP: Sign to sign it.
  2. Open the GPG Keychain Access app. Select “Lookup Key” and type in the email address of the person you are sending your message to. This will search the public keyserver for your source’s PGP key.

If your source has more than one key, select his most recent one.

You will receive a confirmation that your source’s key was successfully downloaded. You can press “Close.”

You will now see your source’s public key in your keychain.

  3. You can now quit GPG Keychain Access and return to writing the email.
  4. Select the entire body of the email (everything, not just the part you wrote), then right-click and go to Services -> OpenPGP: Encrypt to encrypt it. A window will pop up asking who the recipient is. Select the source’s public key you just downloaded and press “OK.”
  5. Your entire message is now encrypted! You can press “Send” safely.

As a reminder, you will only need to download your source’s public key once. After that, it will always be available in your keychain until the key expires.
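The sign-and-encrypt operation that the Services menu performs can also be sketched with the python-gnupg wrapper from the earlier examples. The message, addresses and passphrase are placeholders, and the recipient’s public key must already be in the keychain:

    body = 'Meet at the usual place at 3 p.m.'  # placeholder message

    # Sign with the sender's private key AND encrypt to the recipient's
    # public key in one call: the combination this guide recommends.
    encrypted = gpg.encrypt(
        body,
        recipients=['source@example.com'],  # must match an imported public key
        sign='jane.reporter@example.com',   # selects the sender's signing key
        passphrase='a strong diceware passphrase goes here',
    )

    if encrypted.ok:
        print(str(encrypted))  # the -----BEGIN PGP MESSAGE----- block to send
    else:
        print('Encryption failed:', encrypted.status)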

How to receive a secure email

With our secure message sent, the recipient will now want to decipher it. For the sake of this step, I will pretend that I am the recipient.

I have received the message.

  1. Copy the entire body, from and including “-----BEGIN PGP MESSAGE-----” to and including “-----END PGP MESSAGE-----”. Open your favorite text editor and paste it.

  2. Select the entire text, then right-click and select Services -> OpenPGP: Decrypt to decrypt the message. You will immediately be prompted for your PGP passphrase. Type it in and press “OK.”

  3. You will now see the decrypted message!

Next, you can verify the signature.

  4. Highlight the entire text, then right-click and go to Services -> OpenPGP: Verify. You will see a message confirming the verification.
  5. Press “OK.”
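In the python-gnupg sketch, decryption and signature verification happen in a single call, with the signature status reported alongside the plaintext. The variable armored is assumed to hold the pasted PGP MESSAGE block:

    decrypted = gpg.decrypt(armored,
                            passphrase='a strong diceware passphrase goes here')

    if decrypted.ok:
        print(decrypted.data.decode())   # the plaintext message
        # If the message was signed, verification details ride along:
        print('Signature valid:', decrypted.valid)
        print('Signed by:', decrypted.username)
    else:
        print('Decryption failed:', decrypted.status)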

Setting up GPG4Win on Windows

Tutorial Objectives

  • How to install and configure PGP on a PC
  • How to use PGP operationally

Installing the GPG4Win GPG Suite

This step is simple.

  1. Visit the Gpg4win website and download the GPG suite for Windows.
  2. Once downloaded, run the installer.


  1. Double-click on the downloaded file to begin the installation wizard.
  2. Select the components to install; keep it simple by installing all components except for Claws Mail.
  3. Select “Next.”

A brief description of each component:


  • Kleopatra – a certificate manager
  • GPA – another certificate manager
  • GpgOL – a plugin for Microsoft Outlook
  • GpgEX – an extension for Windows Explorer
  • Claws Mail – a lightweight email program with GnuPG support built in
  • Gpg4win Compendium – a manual

  4. Select desired preferences and click “Next.”

  5. Click “Finish” to exit the installation wizard.

Setting up GPG4Win using Thunderbird and Enigmail

Enigmail, a play on words referencing the Enigma machine used to encrypt secret messages during World War II, is a security extension, or add-on, for the Mozilla Thunderbird email software. It enables you to write and receive email messages signed and/or encrypted with the OpenPGP standard, and it provides a simplified method for sending and receiving encrypted email. This step-by-step guide will help you get started installing and configuring the extension.

  1. Open Thunderbird and navigate to the Add-Ons Manager under the “Tools” menu.
  2. In the search dialog box, type “Enigmail.” A list of available add-ons will appear.
  3. Select the Enigmail add-on from the list.
  4. Click the “Install” button.


  5. A message will indicate that “Enigmail will be installed after you restart Thunderbird.” Proceed with the installation by clicking “Restart Now.”


  6. After the Thunderbird application restarts, proceed with configuring the add-on by selecting Enigmail from the list.


  7. Select “I prefer a standard configuration (recommended for beginners)” and click “Next.”

Generating a Public/Private Keypair

  1. Select the account to generate the keys for.
  2. Enter a strong passphrase.
    • If the passphrase isn’t strong enough, the passphrase quality meter will show a red or yellow bar rather than a green one.
  3. Re-enter the strong passphrase to confirm.


The key generation process gathers random data from system activity into a randomness pool, from which the keypair is generated.


  4. Save the revocation certificate to a trusted, safe and separate device (storage media)!

The revocation certificate can be used to invalidate a public key in the event of a loss of a secret (private) key.


Key Management / View Key in Enigmail


  1. Change the expiration date (less than two years is suggested).


  2. Upload the key to a public keyserver (like hkp://pgp.mit.edu).

Public Keyserver Lookup

You can look up other people’s public keys on a public keyserver directly from within Enigmail.

  1. Select “Search for Keys” from the “Keyserver” dropdown menu.
  2. Enter the email address or <FirstName> <LastName> of the person being looked up.

A good test for this function is to try searching for Glenn Greenwald.
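The same lookup can be sketched with the python-gnupg wrapper used in the earlier examples; the keyserver address mirrors the one mentioned above, and my_key_fingerprint is a placeholder:

    KEYSERVER = 'hkp://pgp.mit.edu'

    # Search the keyserver by name or email address, as Enigmail does.
    matches = gpg.search_keys('Glenn Greenwald', keyserver=KEYSERVER)
    for match in matches:
        print(match['keyid'], match['uids'])

    # Import a chosen result into the local keychain, or publish your own key:
    # gpg.recv_keys(KEYSERVER, matches[0]['keyid'])
    # gpg.send_keys(KEYSERVER, my_key_fingerprint)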


Notice how many active public keys Glenn Greenwald has. This could be intentional, but it can also happen when someone sets up keys on a new device or email client after (1) forgetting the private key passphrase or (2) losing the key revocation file or forgetting the passphrase that unlocks it.

The problem is that anyone contacting the user for the first time has to figure out which key is the correct one to use. It is also a security risk: any one of those unused but still active keys could be compromised, allowing adversaries to access communications.

MORAL OF THE STORY: Set an expiration date, safeguard the revocation key file and manage passphrases.

Revoking a Key

  1. Right-click on the key and click on Key Properties.


  2. At the bottom of the window, click on the “Select Action” dropdown menu and select “Revoke Key.”


  3. A dialog box will pop up asking for the private key’s passphrase. Enter the passphrase for the key being revoked.


  4. Once completed, the user will receive an “Enigmail Alert” indicating that the key has been revoked. The alert warns that if the key is available on a keyserver, it should be re-uploaded so that others can see the revocation. It is important to update the public keyservers to ensure that sources are aware of revoked keys and new keys for each account.


Operationalizing GPG: A Pragmatic Approach

What do encrypt, decrypt, sign, and verify mean?

  • Encrypt takes the recipient’s public key and jumbles a message so that only the holder of the matching private key can unscramble it. The jumbled text is secure from prying eyes. The sender always encrypts.
  • Decrypt takes an encrypted message, combined with the recipient’s secret key, and descrambles it. The recipient always decrypts. Encrypt and decrypt can be thought of as opposites.
  • Signing a message with the sender’s private key lets the receiver know that the user (the person with the user’s email address and public key) actually authored the message. Signing also provides cryptographic integrity, ensuring that no one has tampered with the message. The sender always signs a message.
  • Verifying a message is the process of analyzing a signed message, using the sender’s public key, to determine whether the signature is genuine. Signing and verifying can be thought of as opposites.

When should someone sign a message? When should they encrypt?

If it is unnecessary to sign and encrypt every outgoing email, when should the user sign? And when should the user encrypt? And when should the user do nothing?

There are three sensible choices when sending a message:

  • Do nothing. If the contents of the email are public (non-confidential), and the recipient does not care whether the user or an impostor sent the message, then do nothing. The user can send the message as they’ve sent messages their entire life: in plain text.
  • Sign, but don’t encrypt. If the contents of the email are public (non-confidential), but the recipient wants assurance that the suspected sender (and not an impostor) actually sent the message, then the user should sign but not encrypt. Simply follow the tutorial above, skipping over the encryption and decryption steps.
  • Sign and encrypt. If the contents of the email are confidential, sign and encrypt. It does not matter whether the recipient wants assurance that the user sent the message; always sign when encrypting.

For the majority of emails a user sends, encryption is simply not necessary; the rest of the time, the user should sign and encrypt.

Whenever there is confidential information — such as sensitive reporting information, source names and addresses, credit card numbers, bank account numbers, Social Security numbers, corporate strategies or intellectual property — users should sign and encrypt. With confidential information, users should err on the side of caution and sign and encrypt liberally rather than do nothing and risk leaking sensitive information. For public messages that still need authentication, the middle option applies: sign, but do not encrypt.

Best Practices in Information Security

Despite best practices in the operational usage of PGP encryption, a disregard for the fundamentals of information security can still put a journalist’s communications in peril. It doesn’t matter how strong the encryption is if the user’s laptop has already been compromised; at that point it is only a matter of time before the journalist’s encrypted communication method is in jeopardy.

Below is a short list of some high-level information security best practices. For more information on this subject, see Medill’s National Security Zone Digital Security Basics for Journalists.

  • Good password management

Journalists should not only create strong passwords, but also avoid reusing the same password across accounts. Consider using a password manager.

  • Keep software up to date.

Update software frequently. This helps thwart the majority of attacks on your system.

  • End-Point Security Software

Make use of antivirus and anti-malware software.

  • Be wary of odd emails and accompanying attachments.

When in doubt, don’t click. The goal of most phishing attacks is to either get you to download a file or send you to a malicious website to steal your username and password. If it doesn’t seem right or doesn’t make sense, try reaching out to the person via an alternate communication method before clicking.

  • Stay away from pirated software.

Nothing is truly free, as these software packages can often come with unintended consequences and malicious code packaged with them.

GPG Alternatives

  • CounterMail: a secure online email service utilizing PGP without the complexity of complicated key management.
  • DIME (Dark Internet Mail Environment): a new approach, and potential game-changer, for secure and private email communications.

Additional Resources

Glossary of Terms

Ciphertext: Encrypted text. Plaintext is what you have before encryption, and ciphertext is the encrypted result. The term cipher is sometimes used as a synonym for ciphertext, but it more properly means the method of encryption rather than the result.
Data-at-Rest: Data in computer storage, excluding data that is traversing a network or temporarily residing in computer memory to be read or updated.
Data-in-Transit: Data that is no longer at rest in storage and is in motion, such as data traversing a network.
Digital Signature: A mathematical technique used to validate the authenticity and integrity of a message, software or digital document.
Encryption: The conversion of electronic data into another form, called ciphertext, which cannot be easily understood by anyone except authorized parties.
Fingerprint: In public-key cryptography, a short sequence of bytes used to authenticate or look up a longer public key. Fingerprints are created by applying a cryptographic hash function to a public key.
Key: In cryptography, a variable value that is applied using an algorithm to a string or block of unencrypted text to produce encrypted text, or to decrypt encrypted text.
Password Manager: A software application that helps a user store and organize passwords. Password managers usually store passwords encrypted, requiring the user to create a master password: a single, ideally very strong password that grants the user access to their entire password database.
Private Key: In cryptography, an encryption/decryption key known only to the party or parties that exchange secret messages. In traditional secret-key cryptography, a key would be shared by the communicators so that each could encrypt and decrypt messages; the risk is that if either party loses the key or it is stolen, the system is broken. A more recent alternative is to use a combination of public and private keys, in which a public key is used together with a private key.
Public Key: In cryptography, a value provided by some designated authority as an encryption key that, combined with a corresponding private key, can be used to effectively encrypt messages and create digital signatures.

Term Definitions provided by TechTarget.com, Webopedia.com, and Wikipedia.org

The drone debate: Does the coming swarm of flying gadgets require new privacy laws?

CyPhy drone (Image courtesy of CyPhy Works)

Noticeably missing from the FAA’s proposed drone regulations unveiled earlier this year were any privacy provisions. For the Electronic Privacy Information Center (EPIC), the plaintiff in a suit against the FAA, that was inexcusable.

The advocacy group’s website is full of unnerving facts about camera-wielding drones. They can be equipped with facial recognition, license plate scanners, the capacity to track multiple targets and the ability to operate at distances and heights that make them impossible to detect. Drones are “designed to undertake constant, persistent surveillance to a degree that former methods of video surveillance were unable to achieve,” according to EPIC.

The courts are not the only avenue to effect policy change at the FAA. Public comment on the framework will be accepted until this Friday.

But many experts argue that the FAA isn’t the governing body that should be charged with ensuring drones don’t violate privacy. What’s more, others – chiefly drone-makers and their advocates – question whether unmanned aerial vehicles, or UAVs, even require a new set of privacy standards, saying that existing laws are already enough.

In a privacy impact assessment issued alongside the proposed framework, the FAA stated that while it “acknowledges that privacy concerns have been raised about unmanned aircraft operations … these issues are beyond the scope of this rulemaking.”

While some privacy advocates are worried that the omission may allow for invasive surveillance from commercial or government drones operating inside the US, the drone community said those concerns are more of a red herring than anything else.

“The FAA has wisely backed off all privacy issues [because] there’s no need for a new federal privacy bureaucracy [when] states already have protections in place,” says Charles Tobin, a privacy rights lawyer at the law firm Holland & Knight who represents a coalition of media outlets advocating for drone use in journalism.

“The laws that are on the books are all technology agnostic. They apply to computers, they apply to still cameras, they apply to wireless microphones, they apply to video cameras … and there’s no reason that they can’t be applied – as already written – to UAVs,” he says.

Relying on existing legal protections should be the obvious choice, says Brendan Schulman, head of the Unmanned Aircraft Systems practice at the law firm Kramer Levin in New York.

Nicknamed “the Drone Lawyer,” Mr. Schulman says that “if the concern is physical intrusion or inappropriate photographs, state law governing offenses such as trespass, stalking, peeping or unlawful surveillance … apply.” That means that what people are most fearful of – being stalked, harassed, or surveilled by a drone, or being victimized by a peeping tom behind a drone – are already acts bound by law.

Simply put: the states have things covered, he says.

REGULATORS LAG BEHIND TECHNOLOGY
A presidential memorandum issued the same day as the FAA’s proposed regulations assigns the responsibility to “develop a framework regarding privacy, accountability, and transparency for commercial and private [unmanned aerial systems] use” to the Department of Commerce. The memo states that the department must initiate a “multistakeholder engagement process” within 90 days of the memo’s release – so it must begin work by mid-May.

But government trying to regulate a specific piece of technology is not the best approach, says Matt Waite, professor of journalism and founder of the Drone Journalism Lab at the University of Nebraska-Lincoln, which explores the ways in which drones can be used to further journalistic aims.

“As we are already seeing, the government lags way behind technology when it comes to laws that would deal with that technology. It’s taken the FAA a long time to come up with [proposed] rules for these drones and they’re flying around right now. They’re being used for commercial purposes even though the FAA says, ‘No, you can’t do that.’ ”

Mr. Waite says it’s important to determine exactly what people can’t do – what actions need to be stopped. “We need to start thinking about what we consider a reasonable expectation of privacy in our modern times. And if that’s not allowing [me to] photograph [someone] streaking in their backyard, then that’s great. We can say I can’t do that. But it shouldn’t matter how I do that, [just that] you don’t want me to do it.”

It’s about recognizing that once privacy has been violated, how it was violated is no longer important, says Helen Greiner, chief executive officer of Massachusetts robotics and drone company CyPhy Works. She says that although she understands the privacy concerns related to the commercial use of drones, those concerns are often misdirected: “It’s not a drone issue. It’s a camera issue. In that way, it’s kind of a red herring.”

“You need to go to the real issue, which is pointing cameras at things they shouldn’t be pointed at,” Ms. Greiner says. “And if we’re going to talk about privacy with cameras, it should be for all cameras … whether they’re on a drone or a balloon.”

She says she doesn’t worry that public fear might hurt her business because the drones sold by CyPhy Works are used to perform specific commercial functions. “They may be used to survey a property or a facility, for example,” but they’re not being used to capture footage surreptitiously, Greiner says.

“I believe privacy is an important issue and that it should be regulated, but rules already exist,” she explains. She says it’s unlikely that fears related to the perceived loss of privacy will bog down final passage of FAA regulations – something she’s anxiously awaiting: “It might be wishful thinking, but I don’t foresee a tightening in terms of the finalized regulations.”

THE CASE FOR PRIVACY POLICIES
A public commentary period on the proposed regulations expires Friday, but there’s no firm deadline for when the FAA must have finalized regulations in place. Experts think it could take two years, possibly longer – which means the waiting game has only begun. It also means that commercial drone use will remain technically illegal for the duration, outside of a handful of exemption-type permissions granted by the FAA.

Amie Stepanovich, senior policy counsel for privacy advocacy group Access Now, says that there’s room for improvement when it comes to ensuring personal privacy. That’s because drone technology is in a league of its own: “Drones have [the] capacity to bring a bunch of different surveillance technologies onto a singular platform and to reach into areas that other vehicles have not been able to get to.”

Ms. Stepanovich says that limitations should be put in place to restrict the ways in which government agencies can use drone technology for the purpose of surveillance. “We need things that will, for example, protect users’ location information from being collected and tracked,” she says. “It comes back to tracking people over time without a warrant and being able to pinpoint their exact location. … and we need to make sure that that information is adequately protected.”

But she’s also a fan of technology agnosticism. She says that whatever restrictions are put in place, they should not be drawn up as drone-specific. “There are several other different kinds of technologies that are coming out,” she says, referring to Stingray trackers that are now being used by law enforcement agencies to gather data from cellphones.

The presidential memo issued in conjunction with the FAA’s proposal states that agencies must “comply with the Privacy Act of 1974, which, among other things, restricts the collection and dissemination of individuals’ information that is maintained in systems of records, including personally identifiable information.”

Although the White House’s assurance that government agencies will be held accountable to legacy privacy standards is a start, Stepanovich recommends further attribution and transparency.

“The FAA has a publicly accessible database of who is able to fly airplanes in any specific geographic area in the United States. But they haven’t made a similar commitment to do that for drone operators,” Stepanovich says. She calls that a double standard.

People won’t know which agency, company, or person is behind the remote control of the drone flying over their homes, Stepanovich says. “So the FAA definitely has a role to play in protecting privacy,” she says.

Stepanovich suggests the FAA incorporate a registry: “We’re talking about transparency, requiring that drone users register what technology they are deploying on their drones, and what capacity these drones will have. This just gets at making sure people are aware of what’s going on in their own area.”

Cracking the code: Workshop gives journalists a crash course in encryption

  • TestBed's Aaron Rinehart lectures to seminar attendees prior to the hands-on portion of the day on April 3, 2015. (Jennifer-Leigh Oprihory/MEDILL NSJI)

WASHINGTON — The minds behind TestBed, Inc., a Virginia-based IT consulting firm specializing in IT planning, analytics, testing, prototyping and business advice for the public and private sectors, gave journalists a crash course in digital safety and encryption techniques at an April 3 seminar in Washington.

The daylong event, “Cyber Security Skill Workshop for Journalists: Sending Secure Email,” was co-sponsored by the Medill National Security Journalism Initiative and the Military Reporters & Editors Association, and held in the Medill Washington newsroom.

The seminar began with an introductory lecture on cybersecurity basics and common misconceptions about online privacy and security. Security-related superstitions, such as the idea that browsing in so-called “incognito” or “invisible” modes will keep your digital whereabouts truly hidden, were promptly dispelled.

TestBed’s Aaron Rinehart and David Reese then transformed the event into a hands-on lesson in PGP – an acronym for “Pretty Good Privacy” – as well as other aspects of digital fingerprints, including how to create a public key, how to register it in the Massachusetts Institute of Technology’s PGP directory so that you are more widely contactable by those in the encryption know, and how to revoke (or deactivate) a key for security reasons.

The program also included a brief introduction to the Tor network, a group of volunteer-operated servers that allows people to improve their privacy and security on the Internet. Tor, originally developed by the U.S. Navy, hides the route taken from a computer’s IP address to its eventual browsing destination.

Learn how Tor works via Medill reporter William Hicks’ helpful primer and infographic here.

When asked for the top three lessons he hoped attendees would take away from the event, Rinehart emphasized the importance of “good key management,” or not sharing your private PGP key with anyone; operating “under good security practices” (such as updating software and antivirus programs); and making email encryption a regular habit.

“Don’t compromise convenience for security,” Rinehart said in a post-workshop interview. “Try to make this something you can use every day.”

The event drew a mix of reporters, security experts and students, including military veterans and defense journalists.

Northwestern University in Qatar journalism student James Zachary Hollo attended the event to research encryption resources available for foreign correspondents and to report on the workshop for the Ground Truth Project in Boston, where he is currently completing his Junior Residency.

Hollo said the seminar gave him a better understanding of how to use PGP.

“I had sort of experimented with it before I came here, but this gave me a much better and deeper understanding of it, and I got to sort of refine my ability to use it more,” he said.

Hollo said he was surprised that many attendees came from military service or military reporting backgrounds, since, in his view, “one of the blowbacks against the NSA story [involving whistleblower Edward Snowden] was that it’s like reporting is like betraying your country.”

 

Minimizing your digital trail

WASHINGTON — In popular culture, going “off the grid” is generally portrayed as either unsustainable or isolated: a protagonist angers some omniscient corporate or government agency and has to hole up in a remote cabin in the woods until he can clear his name or an anti-government extremist sets up camp, also in the middle of nowhere, living off the land, utterly cut off from society at large.

But is there a way to live normally while also living less visibly on the grid? What steps can you take to reduce your digital footprint that don’t overly restrict your movements?

What is a digital footprint?

Your digital footprint is the data you leave behind when you use a digital service: browse the web, swipe a rewards card, post on social media. It usually falls into one of two classifications: active or passive.

Your active digital footprint is any information you willingly give out about yourself, from the posts you put up on Facebook to the location information you give to your local mass transit system when you swipe your transit pass.

By contrast, your passive digital footprint is information that’s being collected about you without your express knowledge or authorization, for example, the “cookies” and “hits” saved when you visit a website. When you see personalized ads on Google, for example, those are tailored to you through collection of your personal preferences as inferred through collection of your passive digital footprint.

To assess my digital footprint, I looked through my wallet, my computer and my phone.

The footprint in your wallet

First, the wallet: I have several rewards cards, each representing a company that has a record of me in its database that shows how often I shop and what I buy, which is linked to my name, address, email and birthday—plus a security question in case I forget my password, usually my mother’s middle name.

While I would consider this information fairly benign—they don’t have my credit card information or my Social Security number—these companies can still make many inferences about me from my purchases. CVS, for example, could probably say fairly accurately if I’m sick based on my purchase of medications, whether I’m sexually active based on birth control purchases and any medical conditions I may have based on my prescription purchases.

If I wanted to minimize my digital footprint, I could terminate all my rewards accounts and refrain from opening any more. For me, though, it’s worth allowing these companies to collect my information in order to receive the deals, coupons and specials afforded me as a rewards member.

Next up is my transit pass, which is linked to my name, local address and debit card. The transit authority has a record of every time I swipe my way onto a city bus or train, a record of my movements linked to my name.

A minimal-footprint alternative to a transit pass is single-use fare cards. If purchased with cash, they would leave no record of my travels linked to my name. While this, like giving up the rewards cards, is feasible, it’s far less convenient than the pass – so much less so that again I’m willing to compromise my privacy.

My debit card and insurance card are the two highest-value sources of personal information, but both are utterly necessary—living half a country away from my local credit union, I need my debit card to complete necessary transactions. My medical insurance card, relatively useless to identity thieves unless they have an ID with my name on it, does represent another large file in a database with my personal information—doctors’ visits, prescriptions and hospital stays for the past several years. People with just the physical card, not my license or information, can’t do much with that, but if a hacker gets to that information it could be very damaging.

No driver’s license? No credit card?

To minimize my digital footprint, then, I could pare down my wallet to just the absolute necessities: my insurance card, debit card and license.

Computer footprint

If I’m guilty of leaving a large digital footprint, all my worst infractions probably happen across the Web.

Between Facebook, Twitter and Pinterest, I’ve broadcast my name, picture, email, hometown and general movements, if not my specific location, on each of those sites. Of the three, Facebook certainly has the most comprehensive picture of my life for the past seven years—where I’ve been, with whom, what I like and what I’m thinking.

If I wanted to take myself as far off the grid as feasible, simply deactivating the accounts wouldn’t work—Facebook keeps all your information there for you to pick up where you left off. You can permanently delete it with no option for recovery, but some information isn’t stored just on your account—messages exchanged with friends, for example, or any information shared with third-party apps.

If you keep using social networking sites, be aware that privacy policies change frequently, meaning that even if you choose the most restrictive privacy settings, you often have to go back and re-set them whenever the company changes its policy. Apps complicate things even further, farming out much of your information to third-party companies with different privacy policies.

Even if you’re vigilant about your privacy settings and eschew apps, your profile is only as private as your most public Facebook friend, said Paul Rosenzweig, a privacy and homeland security expert.

When shopping online, it’s important to check the privacy statements and security policies of the companies you’re using. If possible, purchase gift cards to the specific retailer or from credit card companies and use those to shop, so you don’t leave your credit card information vulnerable to breaches like that of Target.

I know that email is not my friend when it comes to online privacy, but I can’t operate without it. I use Gmail on Google Chrome, so I installed Mymail-Crypt, one of several “Pretty Good Privacy,” or PGP, encryption programs. Using it, my messages appear as a jumbled bunch of letters until the recipient decrypts them with their private key. My public key is saved to a key server, like the aptly named Keyserver, where it’s searchable by my email or key ID, and I can link to it on my personal profiles such as Facebook or LinkedIn. People can then send me encrypted email using my public key, which cannot be read without my private key to unlock it. I’ve also started encrypting my G-Chats using Off-the-Record (OTR) messaging.

Email can be used against you. Phishers have started to send more sophisticated emails imitating individuals or companies you trust in order to convince you to give up information like your Social Security number or credit card data. Drew Mitnick, a junior policy counsel at digital rights advocacy group Access Now, said you need to be vigilant no matter what you’re doing on the internet.

“Ensure that whoever you’re dealing with is asking for appropriate information within the scope of the service,” he said. In other words, Gap shouldn’t be asking for your Social Security number.

To limit cookies and other data collection during your Internet use, you can open incognito windows in Google Chrome. In incognito mode, the pages you view don’t stay in your browser or search histories or your cookie store—though your Internet service provider and the sites you visit still have a record of your browsing.

Finally, encrypt your hard drive. Privacy laws vary from state to state and country to country, so the best way to ensure that you’re protected no matter where you are is to encrypt your computer and be careful not to leave it where someone can tamper with it, Mitnick said.

Phone footprint

Another source of vulnerability for many people is a smartphone. As long as you have a phone, you’re on the grid—phone companies can triangulate your position using cell phone towers and location services, and they log your calls. Beyond that, though, there are steps you can take to limit information people can access about you using your phone.

First, be judicious when installing apps. Carefully read the permissions an app requires for installation, and if you’re uncomfortable with them, don’t install it! Read privacy policies and terms of use so you know what data the app keeps on you.

Because I have a Windows phone, many of the basic apps (alarms, maps, Internet Explorer, music, and Microsoft Office) are Microsoft apps and use their terms of use and privacy policy, which is pretty good about not sharing my information with third parties. They also delete your account data after you delete their app, though it may take a few weeks.

I have several social apps, such as the aforementioned Facebook and Pinterest, for which the privacy settings are fairly similar to their desktop counterparts—not very private—with the added bonus of them now having access to my location and phone number. It’s entirely possible—and advisable, if you’re trying to leave a minimal footprint—to live without these apps, but I choose not to.

I’m selective about the apps I install on my phone. Aside from the apps that come with the phone and my social media apps, I only have Uber—and that has a lot of access to my phone. According to the app information, Uber can access my contacts, phone identity, location, maps, microphone, data services, phone dialer, speech and web browser. That’s a lot, and not all of it seems necessary—why does Uber need my contacts? Again, though, I chose to compromise my privacy on this one because the convenience, for me, outweighed the risk.

A precaution I’ve always taken is turning off my location service unless I need it. While my cell phone company can still track me, this prevents my apps from accessing my location. I don’t need Pinterest or Facebook to know where I am to get what I want out of the app, so I don’t provide that information to them.

One of the issues Access Now has been working on is “super cookies”: when you use your cell phone, the cell companies can attach unique identifiers to your browsing as you go across multiple sites. Many companies don’t even offer opt-outs. AT&T has now stopped using super cookies, but other companies still do.

If you don’t already, use two-step verification whenever possible to ensure that no one but you is logging onto your accounts. This process, used by Gmail, has you enter your password and a one-time numerical code texted to a phone number you provide.

Set a passcode to your phone if you haven’t already, and make it something people couldn’t easily guess—don’t use your birthday, for example. I’ve started using random numbers and passwords generated for long-defunct accounts like my middle school computer login that I memorized years ago but that can’t be linked back to me.

Amie Stepanovich of Access Now suggested using four unrelated words strung together for online account passwords—they’re even harder to hack than the usual suggestions of capital and lowercase letters, symbols and numbers.

One final precaution you can take is to encrypt your device. Apple has already started encrypting its phones by default, and Google has promised to do so. Regardless, you can turn on encryption yourself. I have a Windows phone, which does not allow for easy encryption—in fact, I can’t encrypt my SD card at all. To encrypt my phone, I need to log in to Office 365 on my laptop and change my mobile device mailbox policies to require a password, encryption, and an automatic wipe after a number of passcode fails I choose. I then log into Office 365 on my phone to sync the new settings. It’s much more straightforward for an Android—just go to settings, security, and choose “Encrypt phone.”

Off the grid? Not even close

For me – and most people – it’s not feasible to live entirely off the grid. Between my debit card, various online accounts and smartphone, I pour my personal data into company and government databases every day. The trick is to live on the grid intelligently, providing only the information that is necessary and taking steps to protect your devices from unauthorized access.

FAA backed away from proposing privacy regulations for drones – but that might be a good thing, experts say

WASHINGTON—When the Federal Aviation Administration released its proposed “framework of regulations” for governing the commercial use of small unmanned aircraft systems last month, people were surprised. After years of failing to act on a 2012 congressional order to develop regulations, the FAA’s proposal seemingly fell from the sky – unexpected and, as it turns out, a gift to the drone community.

But noticeably missing from the proposed regulations? Privacy.

And the FAA owned up to it. In a privacy impact assessment issued along with the proposed framework, the agency stated that it “acknowledges that privacy concerns have been raised about unmanned aircraft operations. … These issues are beyond the scope of this rulemaking.”

That makes sense, according to Matt Waite. Privacy is not in the FAA’s wheelhouse.

“The FAA has said all along that it is not a privacy organization – it is an aviation safety organization. They don’t have the experience or the skill[set] to be in the privacy business,” Waite added.

A professor of journalism and founder of the Drone Journalism Lab at the University of Nebraska-Lincoln, Waite said that the FAA more or less intentionally walked away from building privacy regulations into its proposal. “They had been talking about it and had been claiming that that was the reason it was all being delayed [as] they were considering privacy regulations … But ultimately, nothing.”

Waite said that the implications of that choice suggest that states are going to have to make up the difference.

“The FAA has wisely backed off all privacy issues [because] there’s no need for a new federal privacy bureaucracy [when] states already have protections in place,” said Charles Tobin, a privacy rights lawyer and partner at Holland & Knight.

“The laws that are on the books are all technology agnostic. They apply to computers, they apply to still cameras, they apply to wireless microphones, they apply to video cameras … and there’s no reason that they can’t be applied – as already written – to UAVs,” Tobin added.

He said he understands why people are concerned, but suggests we look to history for any insight we might need. “Since the turn of the century, people have expressed concerns about every single new phase of technology [that has been] developed to allow people to gather information in public places and private places, and so over the decades, states have developed a strong series of statutes and precedents in the courts that deal with electronic surveillance, eavesdropping, trespassing and just about any other concern for invasion of privacy.”

Adding more statutes would be more than redundant, Tobin said; it would be confusing for everyone involved, and it would raise the possibility of one law conflicting with another.

While recognizing that the FAA made the appropriate call when it chose to step aside, Tobin said the baton has simply been passed on down the line. A presidential memorandum issued the same day as the FAA’s proposed regulations relays the responsibility to “develop a framework regarding privacy, accountability, and transparency for commercial and private UAS use” to the Department of Commerce. The memo states that the department must initiate a “multi-stakeholder engagement process” within 90 days of the memo’s release – so it must begin work by mid-May. According to Tobin, “the development of private industry best practices” by the Department of Commerce is a positive step – but it should avoid stepping further.

Government attempts to regulate a specific piece of technology are just a terrible idea, Waite said. “As we are already seeing, the government lags way behind technology when it comes to laws that would deal with that technology. It’s taken the FAA a long time to come up with rules for these drones and they’re flying around right now. They’re being used for commercial purposes even though the FAA says, ‘No, you can’t do that.’” Law will forever lag behind technology, he said.

“So if that’s the case, then legislatures and policymakers need to acknowledge and accept that and begin to craft rules that are technology agnostic,” Waite added. Therein, he argued, lies the solution to any concern that privacy might be invaded.

Waite said that the key is deciding what we don’t want people to do – what we need to prevent from happening. “We need to start thinking about what we consider a reasonable expectation of privacy in our modern times. And if that’s not allowing [me to] photograph [someone] streaking in their backyard, then that’s great. We can say I can’t do that. But it shouldn’t matter how I do that, [just that] you don’t want me to do it.”

It’s about understanding what offends us, he added – and recognizing that if privacy is violated, how it was violated is beside the point.

The drone-related privacy concerns of the average American are actually pretty obvious, Waite said. They’re afraid of a drone operator peering into their windows like a 21st-century peeping Tom, of drones being used to stalk and harass people, and of someone gathering information about them and their behaviors.

Amie Stepanovich, senior policy counsel for privacy advocacy group Access Now, said these concerns are genuine because drone technology is in a league of its own. “Drones have [the] capacity to bring a bunch of different surveillance technologies onto a singular platform and to reach into areas that other vehicles have not been able to get to. For example, up into very high buildings or into inside spaces.”

But many of the acts people are fearful of are actually crimes, Waite said. They’re already illegal. “It is illegal for you to fly up and peer in[to] someone’s window, those peeping tom laws already handle that.” He admitted that some states aren’t as advanced as others because they require that an offender physically be on the property to be prosecuted as a peeping tom. “[But] that doesn’t take a great leap of mind to fix that real quick,” he added.

Gathering information through surveillance is a different issue, however – one steeped in potential for abuse. Stepanovich said that limitations should be put in place to restrict the ways in which government agencies can use drone technology. “It’s highly advanced and gives them a great deal [of] increased capability and can be used to collect a great deal of information,” she said.

“We need things that will, for example, protect users’ location information from being collected and tracked. … It comes back to tracking people over time without a warrant and being able to pinpoint their exact location. And this is true with drones but … there are several other different kinds of technologies that are coming out. And we need to make sure that that information is adequately protected.”

The presidential memo issued in conjunction with the FAA’s proposal states that agencies must “comply with the Privacy Act of 1974, which, among other things, restricts the collection and dissemination of individuals’ information that is maintained in systems of records, including personally identifiable information.”

The White House’s assurance that government agencies will be held accountable to legacy privacy standards is a good thing, Stepanovich said, but she recommends further attribution and transparency.

“The FAA has a publicly accessible database of who is able to fly airplanes in any specific geographic area in the United States. But they haven’t made a similar commitment to do that for drone operators,” Stepanovich said. She calls that a double standard.

People won’t know which agency, company or person is behind the controls of the drone flying over their homes. They’re already fearful, so that’s not the best way to go about this, Stepanovich added.

“And so the FAA definitely has a role to play in protecting privacy,” she said, recommending that the agency operate a full registry. “We’re talking about transparency, requiring that drone users register what technology they are deploying on their drones, and what capacity these drones will have. This just gets at making sure people are aware of what’s going on in their own area,” she added.

“But it should be up to Congress and other agencies to ensure that users don’t violate one another’s privacy rights.” That requires a separate law, but Stepanovich said it would be a mistake to make a new law for a singular piece of technology.

Like Waite and Tobin, she advises technology agnosticism when it comes to lawmaking, because technology changes so quickly. For that same reason, Stepanovich said the drone privacy debate is an important one: “It will definitely be worth paying attention to because it’s really deciding the future of this technology in the U.S.”

All three agree that the next 24 months will be very exciting. “We’re sort of in the early years of the Wild West stage here, where the rules and the court cases [haven’t happened] yet,” Waite said. “But things are going to happen and they’re going to be tested in court and they’re going to be squared to our constitutional values and when they are, we’ll actually have a fairly stable system.”

“But until then you’re going to have some crazy stuff going on,” Waite added. “You’re going to see people doing things that were never envisioned and you’re going to see [drones] being used in ways that we hadn’t thought of yet. And some of that’s going to be cool and neat and some of it’s going to be kind of ugly.”

One thing is guaranteed: The waiting game has just begun.

White House pushes for student data regulations

WASHINGTON — When the educational company ConnectEDU filed for bankruptcy about a year ago, it tried to do what any business would — sell off its most valuable asset: student data.

Millions of students submitted personal information such as email addresses, birth dates and test scores to the college and career planning company.

The Federal Trade Commission eventually stopped any transactions involving the data after noting that such a sale would violate ConnectEDU’s privacy policy.

Some student educational records are protected by the Family Educational Rights and Privacy Act, or FERPA. Originally signed into law in 1974, FERPA essentially protects the records schools collect on students and gives parents certain oversight and disclosure rights.

The growing influence of technology in classrooms and in administrative data collection, though, is making FERPA out of date.

Teachers, students and parents now routinely submit information to educational services companies, such as ConnectEDU. FERPA does not regulate how these companies use that data. And there is no other federal law that does. The companies’ own privacy policies are the only limit to what the companies can do with the information users provide.

The concern is that ConnectEDU may not be the only education technology company that is trying to sell its data to third parties.

ConnectEDU’s databases, for example, were filled with students’ personally identifiable information including names, birthdates, email addresses and telephone numbers. The sale of that information to other companies is not regulated.

To bring FERPA up to date, President Barack Obama, in conjunction with partners in the private sector, called in January for legislation establishing a national standard to protect students’ data.

“It’s pretty straightforward,” Obama said in a speech at the Federal Trade Commission. “We’re saying the data collected on students in the classroom can be used for educational purposes — to teach our children, not to market to our children. We want to prevent companies from selling student data to third parties for purposes other than education. We want to prevent any kind of profiling about certain students.”

Dubbed the Student Digital Privacy Act, the White House’s plan is loosely based on a 2014 California law that prohibits third-party education companies from selling student information. While other states have laws increasing the transparency and regulating the collection of student data, the California law appears to be the most far-reaching.

Because FERPA doesn’t cover third-party use, some private sector leaders have vowed to establish clear industry standards for protecting student data through the Student Privacy Pledge.

Obama cited the pledge, created by the Future of Privacy Forum and the Software and Information Industry Association in the fall of 2014, as an encouraging sign for the protection of student information.

“I want to encourage every company that provides these technologies to our schools to join this effort,” Obama said. “It’s the right thing to do. And if you don’t join this effort, then we intend to make sure that those schools and those parents know you haven’t joined this effort.”

So far, 123 companies have signed the pledge, including tech and education giants such as Apple, Microsoft, Google and Houghton Mifflin Harcourt.

“There was a lack of awareness, information and understanding about what school service providers did and didn’t do with data and what the laws required and allowed,” Mark Schneiderman, senior director of education policy at SIIA, said. “Rather than waiting for public policy and public debate to play itself out, we figured, let’s just step in and make clear that the industry is supporting schools, is using data only for school purposes, not selling the data, not doing other things that there was a perception out there that maybe [companies were doing].”

The National Parent-Teacher Association and other groups support the pledge, according to Schneiderman.

“It is imperative that students’ personal information is protected at all times,” the National PTA wrote in a statement.

The companies that signed the pledge are not subject to any policing body, but by signing the pledge they show consumers their commitment to student privacy, Schneiderman said.

But many notable educational technology companies, like Pearson Education, have not signed the pledge. Pearson was recently the subject of a POLITICO investigative report that revealed that the company’s use of student data was unmonitored.

According to the report, Pearson claims it does not sell the students’ data it collects.

The College Board, ACT and Common Application are often viewed as integral to the college admissions process, but they have not signed the pledge either.

Instead, these education companies point consumers to their privacy policies, which can often be difficult to understand because of the legal jargon and ambiguous terms.

Some groups such as the Parent Coalition for Student Privacy think the pledge and the privacy policies aren’t enough.

“We also need strong enforcement and security mechanisms to prevent against breaches,” Leonie Haimson, one of the group’s co-chairs, said in a statement responding to Obama’s speech. “This has been a year of continuous scandalous breaches; we owe it to our children to require security provisions at least as strict as in the case of personal health information.”

Out of the 12 commitments listed in the pledge, only one deals with preventing leaks or breaches.

The signees must “maintain a comprehensive security program that is reasonably designed to protect the security, privacy, confidentiality, and integrity of student personal information against risks,” the pledge states.

Haimson said the policies are a decent start, but do not go nearly far enough in protecting educational data.

Regardless, a bill for a comprehensive national standard has yet to be introduced despite the White House’s push.

In early February, though, the White House said that it had been working closely with Rep. Luke Messer, R-Ind., and Rep. Jared Polis, D-Colo., to introduce a bipartisan bill in Congress.

The bill’s release is expected by the end of the month, according to Messer’s office.

Long-ignored government practice lets IRS skirt Fourth and Fifth Amendments

WASHINGTON — When Jeffrey Hirsch went to deposit money at his bank one morning in May 2012, his whole life changed. The teller told him the entire contents of his account—nearly half a million dollars—had been seized by the Internal Revenue Service.

Hirsch, a small business owner from Long Island, was never accused of a crime. Yet he would not see his $446,651.11 again for nearly three years due to the IRS’s civil asset forfeiture program, which allows the agency to seize money without filing criminal charges and keep it, in many cases, indefinitely.

Under federal law, banks are required to report cash deposits exceeding $10,000 to the Treasury Department, and account holders are forbidden from “structuring” deposits smaller than the $10,000 threshold to avoid the reporting requirement. If the IRS suspects someone is “structuring” their deposits, it can take their money without filing a criminal complaint.
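
The threshold logic itself is simple enough to express in code. Here is a minimal Python sketch of the reporting rule alongside a naive structuring flag – the $10,000 threshold is real, but the flagging heuristic is a hypothetical simplification, not the test any bank or agency actually uses:

    # Illustrative sketch of the $10,000 reporting rule and a naive
    # "structuring" flag. The threshold is real; the flagging heuristic is a
    # hypothetical simplification, not the IRS's or any bank's actual test.
    THRESHOLD = 10_000

    def must_report(deposit):
        # Banks file a Currency Transaction Report for cash deposits above $10,000.
        return deposit > THRESHOLD

    def looks_structured(deposits, minimum=3):
        # Flag a run of deposits that all sit just under the threshold.
        return len(deposits) >= minimum and all(
            8_000 <= d < THRESHOLD for d in deposits
        )

    print(must_report(12_500))                      # True
    print(looks_structured([9_500, 9_800, 9_200]))  # True: the pattern Hirsch was flagged for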

The program was designed to help the federal government intercept the drug trade during the 1980s “war on drugs.” But the IRS has increasingly gone after small business owners and others who make frequent, small deposits.

“These laws were intended to target drug dealers and other hardened criminals engaged in money laundering or other criminal activity,” said Robert Johnson, Hirsch’s attorney. “In practice, however, the IRS enforces the structuring laws against innocent Americans who have no idea that depositing less than $10,000 in the bank could possibly get them in trouble with the law.”

Hirsch owns and operates Bi-County Distributors, a small business that distributes products to convenience stores on Long Island. The company had multiple accounts closed due to its frequent cash deposits, which, when more than $10,000, require burdensome paperwork from the bank. Hirsch’s accountant recommended staying below the limit, so Hirsch often made cash deposits under $10,000.

On the basis of civil asset forfeiture, the IRS seized Hirsch’s money in May 2012 and held it for more than two years without filing any charges against him. Twice, Hirsch said, the government offered settlements that would require him to surrender “a substantial portion” of the money.

“I rejected these offers as I felt that I had done nothing wrong and should not be forced to give up my hard-earned money for no reason,” Hirsch said. “I lived with that stress for over two-and-a-half years.”

Hirsch said the seizure drove his business “to the edge of insolvency,” forcing him to take extended lines of credit. In an attempt to demonstrate his innocence, he paid an accounting firm $25,000 to audit his own business.

“Government officials did not question the results of the audit and did not suggest that they were in possession of any evidence of wrongdoing by anyone associated with the business,” Hirsch said. “Nonetheless, the government still refused to return the money.”

To get seized funds back, property owners have to go to court against the Department of Justice—often a lengthy and expensive process.

In January, after a front-page story on civil asset forfeiture was published in The New York Times, the government agreed to return Hirsch’s money.

“In this country, people are supposed to be innocent until proven guilty. But, in the eyes of the IRS, I was guilty until proven innocent—forced to prove my own innocence to get my property back,” Hirsch said. “No other American should be put through the nightmare I experienced.”

But Hirsch’s case is not unique.

Documents obtained by the Institute for Justice, a national law firm that litigates property rights, show that the IRS conducted more than 2,500 of these seizures from 2005 to 2012.

In that seven-year period, the agency collected more than $242 million in suspected structuring violations. At least a third of those seizures “arose from nothing more than a series of cash transactions under $10,000, with no other criminal activity alleged,” according to the report.

And under federal law, the IRS gets to keep this money. Funds seized through civil forfeiture are deposited in the Treasury Forfeiture Fund, which is available for use by the IRS without any appropriation by Congress.

“Shockingly, the government uses the money that it takes through civil forfeiture to pad the budgets of the very agencies that seize the money,” said Johnson, who also works for the Institute for Justice. “The result is a legal system in which the deck is stacked against ordinary Americans.”

While the issue went largely unnoticed until late last year, lawmakers on Capitol Hill—both Republican and Democrat—are now looking for change from the long-embattled agency.

The House Oversight Subcommittee’s first hearing of the new Congress called on IRS Commissioner John Koskinen to testify. He was met with harsh criticism.

Rep. Mike Kelly, R-Pa., went so far as to compare civil asset forfeiture to torture. “You talk about waterboarding, this is waterboarding at its worst,” he said.

The IRS has promised to change. Koskinen apologized to the small business owners at the hearing and said the agency would no longer pursue civil seizure on structuring grounds “unless there are exceptional circumstances.”

“We’ve changed the policy from our standpoint,” Koskinen said.

But Johnson isn’t satisfied with the IRS’ promise.

“The only surefire reform of civil forfeiture is to eliminate the practice entirely, and to require all forfeiture to proceed under the criminal laws,” Johnson said. “Short of that, the IRS policy change—limiting application of the structuring laws to funds derived from illegal sources—should be codified in statute, and without any open-ended loophole for ‘exceptional’ cases.”

Many lawmakers also aren’t satisfied with the IRS’s “exceptional circumstances” standard.

In January, Sen. Rand Paul, R-Ky., and Rep. Tim Walberg, R-Mich., introduced a bill that would curb IRS forfeiture abuses by stopping the agency from seizing funds without criminal charges and by making it simpler and faster for innocent property owners to get their money back.

The bill is only in the early stages of the legislative process, but some sort of remedial legislation is likely to receive support.

“It is wrong without any criminal evidence to seize anyone’s property,” Kelly said. “This flies in the face of everything we are as a country.”

In ‘Parks and Recreation,’ a vision for the future of consumer data privacy issues  

On a sunny morning in Pawnee, Indiana, a notification pops up on Leslie Knope’s phone: “Open Your Door.” Looking outside, she finds a drone at her doorstep, floating effortlessly, cradling a box addressed to her.

“Hey, Leslie Knope!” it chimes as it drops its cargo.

In the U.S., people have been able to use drones only for recreational, research or government purposes, but the Federal Aviation Administration has proposed rules that would expand drone use, especially for commercial purposes. Yet the final season of NBC’s “Parks and Recreation,” set in a not-too-distant 2017, envisions a world in which your internet provider can listen to your every conversation, read every email and text, and use that information to predict your mood and deliver packages to your door. The offending company is Grizzyl, a bubbly, gleefully 21st-century Internet and cell phone provider that shamelessly violates its customers’ privacy.

For ardent libertarian Ron Swanson, who destroys a drone and brings it to Leslie (“This is a flying robot that I just shot out of the sky when it tried to deliver me a package”), the threat of such technology is philosophically horrifying, bringing him together with the liberal Knope to try to stop the behavior. While he originally blames others for making themselves vulnerable to that kind of invasion, he changes his tune when his own privacy is threatened outside of his control.

For the liberal Knope, the concern is more universal: a corporation infringing on its customers’ rights, seen from a populist perspective. As in many episodes, she sees the government serving as an activist voice, protecting citizens from harm at the hands of an ill-intentioned private company.

By placing its characters only two years from now, the show’s creators envisioned a future that’s within our reach. In the show’s view, that future has troubling implications for consumers, with sophisticated technology making it easier than ever for companies to pry into their users’ lives.

Below, we’ve compiled a list of technologies and actions depicted at Grizzyl. With the show’s near-future predictions in mind, we examine the likelihood of each coming true and the current legal structures that govern them.

Use of commercial drones

In the show: After listening to its users’ phone calls, Grizzyl gathers its customers’ personal desires and sends them gifts it thinks they’ll appreciate via drone. When Donna receives two honey bears and boxes of sugarplums – coincidentally, the pet names she and her fiancé use for each other – the characters catch on to Grizzyl’s unethical business practices.

Today’s laws: Americans have very few options allowing them to use drones for commercial purposes. Companies may apply to the Federal Aviation Administration to authorize use of drones on a case-by-case basis. However, no existing legal framework allows for the widespread adoption of drones on a commercial basis, and the FAA describes its approach to the emerging technology as “incremental,” suggesting that you won’t see pizza-delivering drones anytime soon. The FAA Modernization and Reform Act of 2012 aimed to integrate unmanned aircraft by this year, but a recent government audit found that the FAA wouldn’t meet its September deadline. “There should be an eye toward integrating drones into our national airspace,” Peter Sachs, a lawyer specializing in drone law, said about these proposed regulations.

Tomorrow’s technology: When online retail behemoth Amazon announced “Amazon Prime Air” last year, it seemed like an elaborate April Fool’s prank. Yet the company is dead serious about using the technology to deliver packages in as little as 30 minutes, sending the FAA a letter pushing for greater reforms. While Amazon predicts that drone deliveries will eventually be “as normal as seeing mail trucks on the road,” time will tell when its vision becomes a reality. However, under the FAA’s proposed regulations, drone operators would be required to stay within “eyesight” of their craft, according to Sachs. With this stipulation, it would be nearly impossible for vendors to use drones for deliveries.

Consumer data mining

In the show: After the characters receive individualized gift packages delivered by drone from Grizzyl, they quickly realize the only way they would have learned this information about them is through monitoring their calls and texts. Later, when Leslie visits the Grizzyl headquarters in disguise, the Grizzyl vice president of “Cool New Shiz” reveals he knew who she was all along by tracking her location from her phone. He says his company may know Leslie better than she knows herself. He tells her, “There’s nothing scary about Grizzyl. We just want to learn everything about everyone, track wherever they go and even what they’re about to do.”

Today’s laws: Despite the growing attention to consumer privacy and cybersecurity in recent years, especially in the wake of Edward Snowden’s revelations about the National Security Agency’s program to gather millions of Americans’ phone and email records, no law yet squarely regulates consumer data mining. In Sorrell v. IMS Health Inc., the Supreme Court found that a Vermont statute restricting the sale, disclosure and use of records that revealed the prescribing practices of individual doctors violated the First Amendment rights of data mining companies hired by pharmaceutical manufacturers. In a powerful feature story for Time Magazine in 2011, author Joel Stein summed up the current state of data mining for consumers: He contacted a range of private companies that had gathered information about him “in stealth,” creating a detailed picture of his life that had been culled without his knowing.

Tomorrow’s technology: Though the debate over the gathering and use of data has typically centered on government surveillance of private exchanges, companies such as Google, which could be seen as the real-life Grizzyl, already monitor emails sent over the Gmail network in order to tailor the advertisements shown to particular Internet users. As Stein’s 2011 feature shows, companies already have an incredible ability to gather people’s information, an ability that will likely continue to grow unless Congress passes legislation limiting it.

Consumer agreements

In the show: When Leslie Knope discovers the data mining, she brings a lawsuit against Grizzyl. Leslie’s husband Ben argues that the agreement giving Pawnee free WiFi explicitly banned data mining. However, the company was able to sneak a clause “into the 27th update of a 500 page user agreement,” allowing them to monitor all communications sent over the network through Grizzyl products. As Ben said, “a person should not have to have an advanced law degree to avoid being taken advantage of by a multi-billion dollar company,” a sentiment oft repeated in today’s on-the-grid society. Ben compelled Grizzyl to be “upfront about what you’re doing and allow people the ability to opt out.”

Today’s laws: According to Ira Rheingold, executive director of the National Association of Consumer Advocates, the U.S. gives consumers little protection against how a private company constructs its consumer agreements. A report released by the Consumer Financial Protection Bureau, an independent government agency created by the 2010 Dodd-Frank Wall Street reforms, showed that consumers often hand over their rights in consumer agreements without realizing it: In 92 percent of credit card disputes that went to arbitration, consumers had signed contracts precluding their ability to sue. In effect, even the savviest consumer, like Ben Wyatt, can be thwarted by a legal document that buries its most damaging clauses under pages of legal jargon, something that has become commonplace in our society.

Tomorrow’s technology: When consumers sign these agreements, they may unknowingly give up their right to sue, effectively stripping themselves of the ability to take these corporations to trial in the event of an injustice. Sen. Al Franken, D-Minn., has championed the Arbitration Fairness Act, which seeks to “restore the rights of workers and consumers” by assuring transparency in civil litigation and prohibiting forced arbitration clauses in consumer agreements. While the bill has been introduced unsuccessfully since 2011, Franken plans to reintroduce it this session.

 

Private sector remains wary of government efforts to increase cybersecurity collaboration

WASHINGTON — President Barack Obama and lawmakers have announced plans to increase information sharing between the government and the private sector following data breaches at major companies. But companies are hesitant to join these initiatives because of liability and privacy concerns – and because sharing information could put them at a competitive disadvantage.

Experts agree information sharing is essential in preventing and responding to cyber attacks, but the government and private sector bring different perspectives and strategies to mitigating the threats.

Companies need to take the approach that there is “strength in numbers,” said Greg Garcia, executive director of the Financial Services Sector Coordinating Council.

“To the extent that we can have what amounts to a neighborhood watch at a national scale, then we’re going to be better aware of the adversaries and what they’re up to and what they’re trying to do,” Garcia said.

One area where progress has been made is in the sharing of cybersecurity threat indicators, which identify the source of cyber attacks, said Mary Ellen Callahan, former chief privacy officer at the Department of Homeland Security. These indicators can include bad IP addresses, malware that’s embedded in emails or specific coding in software, she said.

DHS and the Mitre Corporation have developed a common language and exchange protocol to improve communication about cyber threat information between the government and the private sector. Structured Threat Information Expression and Trusted Automated Exchange of Indicator Information – known as STIX and TAXII, respectively – are used in tandem to quickly share the information.

“It’s one thing to have these executive orders and things, but it’s another to have the technical enablers to make it easy for these companies to do it,” said John Wunder, lead cybersecurity engineer at Mitre. “You want to make it easy to share threat information in a way that you share exactly what you want.”
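
For a sense of what a shared indicator actually looks like, here is a minimal Python sketch that builds a STIX-style record for a “bad IP address.” It uses the JSON form of the later STIX 2 specification for readability (the STIX of this era was XML-based), and every field value is a placeholder:

    # Minimal sketch of a threat indicator for a malicious IP address, in the
    # JSON form of the later STIX 2 specification. All field values are
    # placeholders; a production pipeline would publish the object to a TAXII
    # server rather than print it.
    import json, uuid
    from datetime import datetime, timezone

    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")
    indicator = {
        "type": "indicator",
        "spec_version": "2.1",
        "id": "indicator--" + str(uuid.uuid4()),
        "created": now,
        "modified": now,
        "name": "Suspected command-and-control server",
        "pattern": "[ipv4-addr:value = '203.0.113.42']",  # documentation-range IP
        "pattern_type": "stix",
        "valid_from": now,
    }
    print(json.dumps(indicator, indent=2))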

Yet these programs haven’t fully matured, and more participation is needed to make them effective, said Judith Germano, a senior fellow at New York University School of Law’s Center on Law and Security.

“I hear from companies that they are often less concerned about where the threat is coming from, but what is the threat and what can they do to stop it,” she said. “That’s the valuable information. Some of that is being shared and is very helpful, but it needs to be expanded.”

Last month, Obama announced an executive order promoting cybersecurity information sharing. The order encouraged the development of information sharing and analysis organizations to spearhead collaboration between the private sector and government. He tasked DHS with creating a nonprofit organization to develop a set of standards for ISAOs.

Despite these efforts, robust information sharing is still lacking.

“Everyone wants information. Nobody wants to give information,” said Mark Seward, vice president of marketing at Exabeam, a big data security analytics company.

Companies fear sharing information with the government could reveal corporate secrets or consumers’ private information, said Martin Libicki, a senior management scientist at the RAND Corporation. He added that sharing information with the government could also pose legal risks if the information shows companies did not follow federal regulations.

Germano, who also runs a law firm focused on cybersecurity issues, says cybersecurity collaboration comes down to a matter of trust. The private sector, she said, is wary of the government.

“On one hand [the government is] reaching out as a friend and collaborator to work with companies,” she said. “On the other hand, the same government has an enforcement arm outstretched with the FTC, the SEC that if you do not comply, there can be repercussions, possible lawsuits and other regulatory action taken against you.”

Therefore, only information that is directly related to a threat should be shared and stored, said Callahan, now a partner at Jenner & Block. Further, she said that when companies share a large amount of information at once, it slows the process of assessing the threat, and they often share more information than is necessary.

The U.S. also lacks “an intelligent and forceful deterrence strategy” for cyber attacks, said Matthew Eggers, senior director of the U.S. Chamber of Commerce’s national security and emergency preparedness department, at a congressional hearing earlier this month. He also said the government needs to provide more assistance to companies that have suffered from hacks.

“U.S. policymakers need to focus on pushing back against illicit actors and not on blaming the victims of cybersecurity incidents,” Eggers said. 

To address some of these concerns, Sen. Tom Carper, D-Del., introduced in February the Cyber Threat Sharing Act of 2015, which looks to provide liability protections for companies when they share cyber information with the government.

The bill would prohibit the government from using shared cyber threat data as evidence in a regulatory action against the company that shared the information. It would also strengthen privacy protections and limit how shared data could be used. The bill has been referred to the Committee on Homeland Security and Governmental Affairs.

In February, Obama also called on the Director of National Intelligence to create the Cyber Threat Intelligence Integration Center, a national intelligence center aimed at “connecting the dots” on cyber threats. The center will “collect intelligence, manage incident response efforts, direct investigations,” among other responsibilities.

However, experts remain skeptical about the center.

“What concerns me about that is if you read the president’s memoranda on [the Cyber Threat Intelligence Integration Center], it says that it’s consistent with privacy and civil liberties protections as relevant to that agency,” said Callahan, the Jenner & Block lawyer. “Well, the intelligence community, as you know, has reduced [privacy] protections.”

The center’s framework will be similar to that of the National Counterterrorism Center, which is a concern for Libicki, of the RAND Corporation.

“The last cyber attack had elements of terrorism in it. Does that mean we should look at this entire problem purely through the lens of counterterrorism?” Libicki said. “Why are you duplicating a methodological framework that culminates in a set of actions, like predator drones, which are totally inappropriate for cyber?”

Kathleen Butler, a spokesperson for the Office of the Director of National Intelligence, said she had no comment beyond the president’s announcement because initial planning for the center is still underway.

While experts say it will take time for the private sector to fully engage in the information sharing initiatives, they view the government’s efforts so far as mostly positive.

“This is about enabling people to share what they know and get access to what others know such that protection can be more pervasive,” said Bobbie Stempfley, Mitre’s director of cybersecurity implementation. “That’s really a powerful concept.”