This year at HOPE, the biennial hacker conference in NYC, several incidents led to a statement of no-confidence in HOPE's code of conduct. You can read more in the statement itself, but the gist is that there were a small number of far-right attendees, including a self-described "nationalist", who were disrupting and intimidating both talks and attendees. The staff didn't handle it well. This is unfortunately consistent with my own experience. I know that the top-level organizers of HOPE genuinely care about creating an inclusive environment, but the current staff and procedures aren't working. Based on both my past experiences at HOPE and on my experiences creating inclusive spaces, here are some observations about where the process is failing and what can be done.
Trained Staff. CoCs need to be applied consistently to create an environment of trust and safety. Enforcement of the CoC at HOPE has been highly variable between staff members, with some seeming to think that it's optional. HOPE requires volunteer work from hundreds of people, but not all of them need to be trained to handle CoC complaints, only the points of contact, which leads to the next point...
Points of Contact. The CoC listed a phone number and an email address for making CoC complaints. If you are being physically threatened, the need is immediate and a phone call or email is not sufficient. Having a single person on call isn't sufficient either. The venue spans 18 floors of the Hotel Pennsylvania, where the line for the elevators takes about the same time as your flight to NYC. Anywhere there are attendees, there need to be identifiable, well-trained points of contact for immediate response to CoC complaints.
Intimidation. Over the past several events, the staff has shown a pattern of refusing to enforce the CoC until a physical assault has occurred, meaning that threats and intimidation have been treated as totally allowable. People who have been going to HOPE for a long time often describe it as feeling like "home." There's no way to have that feeling if you're being threatened or harassed. I understand the importance of free speech in the hacker ethos, but speech can be separated from intimidation. In the recent incident, one attendee bragged (on mic to an entire room) about attending an alt-right rally. That was intimidation, and should have been a sufficient CoC violation to get him removed from the event.
Validation. This point has gotten much less attention, but it's an important one: the effectiveness of a response to a complaint depends just as much on emotional connection as it does on the concrete outcome. Too often, when HOPE staff haven't been able to act, they have minimized and invalidated the experiences of the complaining attendees. Unicorn Riot reported one staffer responding to a complaint by saying he wouldn't care even if an attendee came in with a swastika flag. Simply taking the time to understand and discuss a complaint can go a long way towards creating an environment of trust, even if no concrete action is possible.
I'm already looking forward to the next HOPE. A lot of attendees are questioning whether they want to attend again, but I believe in the organizers and in the majority of the community. There's a lot of work to do, but HOPE can and will fix this.
Decentralized network architectures can protect against vulnerabilities not addressed by strong encryption. Encryption works well, but only when private keys can be kept secret and ciphertext can get to its destination intact. Encrypted messages can be surveilled by acquiring private keys (FBI and Lavabit/Apple) or by man-in-the-middle attacks (NSA QUANTUM), or censored by blocking communication entirely (Pakistan and YouTube). These attacks are difficult to protect against because they are social rather than technological. But they all have one thing in common: they require centralization. Censorship and man-in-the-middle attacks target communication bottlenecks, and legal coercion targets a small number of legal entities. This talk will discuss decentralized approaches to attack tolerance, including ongoing original research.
Naomi Colvin, Courage Foundation
Nathan Fuller, Courage Foundation
Carey Shenkman, Center for Constitutional Rights
Grace North, Jeremy Hammond Support Network
Yan Zhu, friend of Chelsea Manning
Lauri Love, computer scientist and activist
Naomi begins. The Courage Foundation started in July 2014 to take on legal defense for Edward Snowden. They believe there is no difference between hackers and whistleblowers when they're bringing important information to light. Information exposure is one of the most important triggers for social action. The foundation has announced its support for Chelsea Manning, who is beginning her appeal.
Carey wants to talk about public interest defenses. In the US, politicians call for whistleblowers to return to the US, present their motivations, and "face the music" as part of their civil disobedience. In reality, the Espionage Act and the CFAA do not allow whistleblowers to use public interest motivations as a defense or to present evidence demonstrating a public interest. Whistleblowers like Ellsberg have called for public interest exceptions to the Espionage Act and the CFAA. It's important to protect civil disobedience.
Grace: Jeremy Hammond hacked Stratfor with AntiSec/Anonymous and leaked the files on WikiLeaks. Tech has an incredible power to bring movements together. This requires some tech people to move outside of their comfort zone to interact with activism around other (non-technical) issues.
Yan: Chelsea really loves to get letters and reads every one she receives. While at MIT, Yan met Chelsea through the free software community. Chelsea's arrest was the first time Yan saw the reality of whistleblowing and surveillance. Chelsea is very isolated and only gets 20 people on her phone list; to add someone, she has to remove someone else. Visitors have to prove they met her before her arrest in order to visit. Yan reads a statement from Chelsea Manning.
Lauri Love was arrested in the UK in 2013 without charges and later learned that he had been charged in the US, which is very unusual given that he is not alleged to have committed any crimes within the US. The problems with the CFAA are being used to contest his extradition to the US. Lauri greatly appreciates the support of the Courage Foundation.
Lauri asks Carey: The UK allows you to present a defense of necessity, although it’s difficult. Can that happen in the US?
Carey: It's difficult to say. The CFAA has always been controversial. It was passed in 1986, literally in response to the movie WarGames, which sparked the fear of a "cyber Pearl Harbor."
Grace: Jeremy Hammond’s hacks were politically focused, but there were absolutely no provisions for acts of conscience in his defense. In fact, having strong political views leads to harsher treatment.
Lauri: Can we bypass the court system and national council of conscience?
Lyn: Mother of Ross Ulbricht, creator of Silk Road. Judges cited political views as reasons for the severity of Ross's sentence. The court system also allows prosecutors to break the letter of the law when it is done in "good faith," while it doesn't allow defendants to do so.
Lauri: It's important to be vocal about these details.
Audience question: CFAA prosecutions are really about politics, not about computers. It seems like some issues like gay marriage can change very quickly in American culture. What can be done to create these changes?
Grace: We couch issues in certain terms. What people find acceptable or unacceptable is often determined by perspective and simplified views of “legal” vs. “illegal.”
Live notes from HOPE XI
Garrett Robinson, CTO of Freedom of the Press Foundation and Lead Developer of SecureDrop
SecureDrop debuted at HOPE X in 2014. FPF was founded in 2012, initially to crowdfund for WikiLeaks. They've since been crowdfunding for various open-source encryption tools for journalists and for whistleblower Chelsea Manning's legal defense. They're also suing the federal government to compel responses to FOIA requests. But their main project is SecureDrop.
Over the past couple of years, many of the plans for SecureDrop were realized, and a lot changed. Last year, SecureDrop was installed at 12 independent organizations, a mixture of big and small media outlets and activist nonprofits. All but one still use the software, and several more have installed it, for a current total of 26 active installations. They've improved the documentation to make installation easier. There's high demand: more organizations are waiting for help with installation and training.
FPF intentionally discourages media outlets from acknowledging which information was received via SecureDrop, which makes outreach complicated. One acknowledged use was by the hackers who exposed violations of attorney-client privilege by prison phone provider Securus.
As an internet service open to the public, SecureDrop attracts trolls. At The New Yorker, over half of their submissions were fiction or poetry.
Garrett gives an overview of how SecureDrop works. There are three main components: a network firewall, a monitor server, and an application server. When a source wants to submit to SecureDrop, they use Tor without JavaScript. They are assigned a secure pseudonym and are then able to have a back-and-forth conversation with journalists. The journalist logs into the document interface, a Tor hidden service, and is presented with a list of documents. All files are stored encrypted and transferred using USB drives or CDs to an air-gapped computer for decryption, reading, and editing.
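To make the "store only ciphertext" part of that workflow concrete, here is a minimal sketch of the pattern. It is not SecureDrop's actual implementation (SecureDrop uses GPG and a more involved setup); it uses PyNaCl's SealedBox purely as an illustration of an application server that holds only ciphertext it cannot decrypt itself.

```python
# Illustration only: SecureDrop encrypts submissions with GPG; this sketch uses
# PyNaCl's SealedBox to show the same pattern, where the application server
# stores only ciphertext that it cannot decrypt.
from nacl.public import PrivateKey, SealedBox

# Journalist key pair; the private half lives only on the air-gapped workstation.
journalist_key = PrivateKey.generate()
journalist_public = journalist_key.public_key

# On the application server: encrypt the submission to the journalist's public
# key and store only the ciphertext.
submission = b"contents of a leaked document"
ciphertext = SealedBox(journalist_public).encrypt(submission)

# On the air-gapped workstation: decrypt with the private key.
plaintext = SealedBox(journalist_key).decrypt(ciphertext)
assert plaintext == submission
```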
The current project goals are: 1. keep SecureDrop safe, 2. make SecureDrop easier for journalists to use, and 3. make SecureDrop easier to deploy and maintain.
Keeping SecureDrop safe. Tor hidden services enable end-to-end encryption with perfect forward secrecy without relying on certificate authorities. The NSA had difficulty de-anonymizing Tor (as of 2012). Since then, the FBI has been able to install malware on users' computers, allowing the de-anonymization of Tor hidden services. In 2014, Tor identified a possible attack, which turned out to be a large-scale operation coordinated by multiple states, called Operation Onymous. The attack was made possible by researchers at CMU working with the FBI. Tor was eventually able to blacklist the attackers and block them out. Directories of hidden services have been shown to enable correlation attacks: a 2015 USENIX paper by Kwon et al. showed that it's possible to use machine learning and a single malicious Tor node to de-anonymize SecureDrop users without leaving a trace.
Making SecureDrop easier to use. They are experimenting with software isolation (e.g., Qubes) to make air-gaps unnecessary. If the system isn't usable, journalists either don't use it or work around the security in harmful ways.
Making SecureDrop easier to deploy. There is a support process in place, but it's inefficient. The documentation, when printed out, runs to 138 pages. As an aside, Garrett wants a Guy Fawkes toaster.
Garrett wants to live in a world where SecureDrop is obsolete. He wants to live in a world where standard technology enables privacy at the level of SecureDrop. There’s been some progress. Open Whisper Systems has developed Signal and enabled end-to-end encryption for WhatsApp (even though there are security concerns with metadata collection).
Notes taken at HOPE XI.
Presenters:
asn
Nima Fatemi
David Goulet
Nima gives some basic stats. New board, 6 members. 8 employees, 12 contractors. 40 volunteers. 7000 relay operators. 3000 bridge operators. Dip in relay operators around April 2016 with corresponding spike in bridge operators.
Five teams work on Tor. The Network team is working on better cryptography (Ed25519 and SHA3). Tor depends on 8 trusted computers around the world (the directory authorities) that maintain consensus, and these are sometimes attacked with DDoS. There is now a backup list that can be used when no consensus is reached.
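For readers unfamiliar with Ed25519, here is a small illustration of what signing and verification with such a key pair look like. This is not Tor's code; it simply uses PyNaCl (my choice here, not something mentioned in the talk) to show the primitive in isolation.

```python
# Illustration only: Ed25519 signing and verification with PyNaCl,
# not Tor's actual implementation.
from nacl.signing import SigningKey
from nacl.exceptions import BadSignatureError

signing_key = SigningKey.generate()   # private signing key
verify_key = signing_key.verify_key   # corresponding public key

signed = signing_key.sign(b"example router descriptor")

# Verification succeeds and returns the original message...
assert verify_key.verify(signed) == b"example router descriptor"

# ...and raises BadSignatureError if the message or signature is tampered with.
try:
    verify_key.verify(b"tampered message", signed.signature)
except BadSignatureError:
    print("signature check failed as expected")
```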
The Application team is doing ongoing development on the Tor Browser: porting the browser to mobile platforms and doing research and development on sandboxing and usability. Also working on TorBirdy (email) and Tor Messenger (XMPP, OTR).
The UX team collaborating with security usability researchers. Can’t collect usage data because of privacy. Running user studies. Now have a security “slider.” Now display routing chain.
Community team drafting social contracts, membership guidelines, etc. Doing outreach, including the Library Freedom Project.
Measurement team received $152,500 grant from Mozilla. Revamping entire metrics interface.
Ahmia is a search engine for onion services. Tor now gives badges to relays.
Notes taken at HOPE XI.
Gabriella Coleman, Anthropologist, Professor, McGill University
Biella spent several years studying Anonymous. Found them “confusing, enchanting, controversial, irreverent, interesting, unpredictable, frustrating, stupid, and really stupid.” She expected to have to convince people they weren’t terrorists, but that didn’t happen.
The media usually refers to Anonymous as activists, hacktivists, or vigilantes, rather than terrorists. Pop culture has taken up Anonymous, which has helped inoculate them from the terrorist label.
The terrorist label can be used in the media for political ends: Nelson Mandela, for example, was labeled a terrorist. In France, the Tarnac 9 were arrested as terrorists for stopping trains, but the charge was later changed. In Spain, puppeteers were arrested for "inciting terrorism" and jailed for five days, but the case was thrown out. Common privacy tools like Tor and Riseup can lead to suspicions of terrorism. Police and others in the US have tried to get Black Lives Matter designated as a terrorist organization.
Biella cites "Green Is the New Red." Since September 11, terrorism has been redefined to suit political whims, often impacting radical activists such as animal rights activists. What was once "monkey wrenching" or "sabotage" is now considered "terrorism." A group called the SHAC 7 was convicted under the Animal Enterprise Protection Act for running a website that advocated for animal liberation; many received multi-year jail sentences.
Focusing on technology, the language of terrorism has often been used to describe hacking. Biella was concerned she’d be targeted by the FBI for studying Anonymous. Targets of Anonymous described them as terrorists in the media. The hacktivist Jeremy Hammond was found to be on the FBI terrorism watch list, but because he was an environmental activist, not because he was a hacktivist. GCHQ described Anonymous and LulzSec as bad actors comparable to pedophiles and state-sponsored hackers. In the US, Anonymous was used as a primary example during Congressional hearings on cyberterrorism. The Wall Street Journal published a claim by the NSA that Anonymous could develop the ability to disable part of the US electricity grid.
Timing influenced how the public received these claims. The government compared WikiLeaks to Al-Qaeda, and compelled Visa and PayPal to freeze their accounts. These actions were seen as extreme by the public, and Anonymous launched Operation Payback, a DDoS of the PayPal blog, in protest. The media framed the event as a political act of civil disobedience. A month later, Anonymous contributed to social revolutions in the Middle East and Spain, and to Occupy. Anonymous has been described as incoherent for being involved in so many different things, but this flexibility has helped inoculate them against the terrorist label. Anonymous is a "multiple-use name," as described by Marco Deseriis.
The Guy Fawkes mask has played an important role. The use of the mask was largely an accident, but it carries connotations of resistance. These connotations were historically negative, but they became positive with Alan Moore's "V for Vendetta" and its film adaptation. There's a feedback cycle between reality and fiction about resistance to totalitarian states. There's an astounding amount of media about hackers; Biella's favorite is a film called "Who Am I." There's even a ballet based on the story of Anonymous. RuPaul discussed Anonymous with John Waters as a type of youthful rebellion. In contrast, animal rights activists are often portrayed as dangerous and unlikable.
ISIS uses social tactics similar to those of Anonymous. The differences between the two groups create a contrast, amplified by Anonymous declaring war on ISIS.
In 2012, the Anti-Counterfeiting Trade Agreement (ACTA) was under debate. The Polish population protested ACTA, and Anonymous got involved with Operation ANTI-ACTA in support of Polish citizens. A number of Polish members of parliament wore Guy Fawkes masks to show disapproval of ACTA. The gesture helped to legitimize Anonymous and its tactics.
Tides can change very quickly. An infrastructure attack tied to hacktivists could turn the public against Anonymous. Art and culture really matter. Sometimes the world of law and policy sees art and culture as “soft power,” but Biella argues this is a false dichotomy. She appeals to people who work in the arts to continue, perhaps writing children’s books about hackers.
Phineas Fisher created a brilliant media hack on Vice News. Vice arranged an interview with Fisher, who requested to be represented by a puppet.
Notes taken at HOPE XI.
What the hack?!
Aunshul Rege, Assistant Prof in Criminal Justice at Temple University
Quinn Heath, Criminal Justice student at Temple University
Aunshul begins. The work is based on David Wall's seven myths about the hacker community: how does the media portrayal of hacking differ from reality? There are three objectives: 1. get the hacker community's perspective, 2. learn how the community feels about Wall's myths, and 3. gather general thoughts on how the media interacts with the hacker community. The work was conducted through interviews with self-identified hackers at HOPE X.
Quinn starts on the first objective. What makes someone a hacker? Plays some interviews from HOPE X. Every hacker had a unique story, but there were common threads. Many hackers felt they had been hackers from a young age. There was also a common thread of hacking bringing empowerment.
First myth: cyberspace is inherently unsafe. Mixed response from interviewees. Some felt that the internet was "just a pipe," while others believed that, as a human system, it enables misuse.
Second myth: the super hacker. Interviewees believed that no one person knows everything. Highly unrealistic. Real people just aren’t as interesting.
Third myth: cybercrime is dramatic, dystopic, and futuristic.
Fourth myth: hackers are becoming part of organized crime. Many felt there was some truth to this.
Fifth myth: criminals are anonymous and cannot be tracked. Everybody uses handles, but you still have to go through an IP address; everyone leaves a trace. The only way not to get caught is to not do anything worth getting caught over.
Sixth myth: cybercriminals go unpunished and get away with crime. Law enforcement is making examples out of people in the hacker community. You have to be careful about what you type into a terminal; assume it will be used against you in the worst possible light. Media portrayal translates into harsher prosecution for hackers than for violent criminals. The CFAA is abused by law enforcement, for example in the Aaron Swartz case.
Seventh myth: users are weak and not able to protect themselves. Media focuses too much on companies and not enough on the users whose data is released. People are becoming less scared of technology.
Quinn moves on to common themes. One was an objection to the way the word "hacker" is used in the media: the word is ambiguous, but the media uses it in a much narrower and more disagreeable way.
Women are underrepresented in the hacker community, even though the community tends to be more liberal. Women in technology sometimes aren’t presented as hackers because they don’t fit existing media stereotypes.
Hackers are portrayed as geeky, nerdy, boring, Caucasian males. But there are hackers of all races. Sometimes hackers of color are presented as a vague, formless threat.
Hackers are presented in a polarized way: either losers or dangerous and powerful. Women in the media are always attractive and sexualized and/or goth.
The media sometimes focuses on the hack, and sometimes on the hacker. It's easier to get information on a hack, and easier to twist the facts to suit the story the media or company wants to convey. The media likes to have a face and a personality, but that's rare.
How does the news interact with the hacker community? There needs to be better journalism. The media takes snippets that don't give the full story but still shape public opinion.
What about movies? Most of what happens is inaccurate. "They're not going to make a movie about people staring at computer screens"; it would be "boring as hell." Movies and the media don't acknowledge the social engineering aspect. It makes sense that Hollywood would do this, but less so that the news does.
What about the 1995 movie "Hackers"? It's almost a parody. It's old and outdated, even for when the movie was set. It's fun, but not good or accurate, and it stereotypes what hackers are. Almost everyone mentioned this movie and loved it in an "it's so bad it's good" kind of way.
Over time, the hacker stereotype has gone from the teenage prankster of the 80s to dangerous cyber-warriors, or to the weird, young, and soon-to-be-rich.
Hackers are careful about using the word because they’re sensitive to how people will perceive it. They described a strong personal impact due to the media portrayal. But the community remains strong.
What kind of movie would hackers make about hackers? It would be fun to flip all the stereotypes. Most hacking is boring from the outside, so it would need to be dramatized, but it could be more realistic.
Aunshul: interested in more information on race, gender, and age. Also wants to know how the hacking community is influenced by media stereotypes. They will be publishing this work soon.