BY the engine room | Thursday, July 17 2014
This post is part of the live coverage of the 2014 Open Knowledge Festival in Berlin by the engine room. For more information you can read the intro post, follow the liveblog, read the flash interviews and follow us on Twitter.
The last update of our live blogging is up: we are happy to announce that we are now in the afterglow of an Open Knowledge Festival jam-packed with insightful creations, deep (and deeply important) conversations, and ideas for the future of open data. We wanted to thank all the people who helped keep the live blog up to date, contributed information about sessions, and shared insightful thoughts on their projects and work around the world.
Special thanks to the OKFN team for making the event happen, and to WeGov for giving us the space to capture reflections about the event on their site in real time. It has been a real pleasure.
Update July 17, 5PM CET
School of Data, Peer to Peer University and Open Tech School organized a world-café style workshop to share their experiences in designing and conducting training processes, online and offline.
The areas covered were:
- How to organise tech and data workshops
- Building effective curriculum and accreditation
- Types of education activities: a blended offline/online experience
- Designing passion driven communities
How to organise tech and data workshops
Robert Lehmann from Open Tech School and Emma Prest from DataKind UK shared key lessons learned about organizing and setting up tech workshops. In a perfect world, the dream ratio of trainers to participants is 1:3, to really make sure every participant is followed and heard. It is important to actively go and ask how participants are getting on, without waiting for questions. Telling stories helps explain what participants are going to learn, what they have accomplished and how it fits into the bigger picture of their work. The concept of a "parking lot" was discussed: a place to put thoughts and questions not included in the agenda or curriculum, to come back to at the end, for participants to discuss between themselves after the sessions, or to take home as points of insight. It is also important to make sure participants leave the workshop knowing what's possible and where to look for help, and armed with a problem-solving mentality.
The panelists also raised some flags, or things to be aware of: it is important not to run "installation parties" because they remove the momentum and don't bring enough participatory learning in the room. It is also important to remember that feedback unfortunately won't be completely honest. Ultimately, it is important to adapt and make the material relevant to the participants.
Building effective curriculum and accreditation
Juan Manuel Casanueva from SocialTIC expanded on their lessons and experiences working with and hacking the School of Data curriculum. They learned three things:
- know who your audience is, what activities they do and what they need to learn, and what skills they have
- training is a process, not a syllabus - expand training into an experience
- Help people participate in different ecosystems
They try as much as possible to blend practice and tools: asking people "What does this data tell you?" is good, but it is best to use the group's everyday data. Helping participants apply practical know-how, like scraping, pivoting or filtering, to their own data builds skills they will be able to use in their everyday work.
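As a minimal sketch of what filtering and pivoting "the group's everyday data" might look like in practice, here is a toy example using only Python's standard library; the budget figures and column names are invented for illustration:

```python
import csv
import io
from collections import defaultdict

# Hypothetical regional budget data, standing in for a group's own dataset.
raw = """region,year,amount
North,2013,120
North,2014,150
South,2013,80
South,2014,95
"""

rows = list(csv.DictReader(io.StringIO(raw)))

# Filter: keep only the most recent year's figures.
recent = [r for r in rows if r["year"] == "2014"]

# Pivot: build a region-by-year table of amounts.
pivot = defaultdict(dict)
for r in rows:
    pivot[r["region"]][r["year"]] = int(r["amount"])

print(recent)
print(dict(pivot))
```

The same filter-then-pivot pattern carries over directly to spreadsheet tools or pandas once participants are comfortable with the idea.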
Ignasi Alcalde from IA Knowledge management also shared his main lessons learned from his experience:
- don't leave people alone between workshops. Build online processes.
- let people learn by doing - let them crash, learn from failure
- adapt datasets for your audience
- learn participants' skills beforehand (through surveys), make sure there's a mix of skills in a group
- think about a path for your audience: badges, propedeutics
Types of education activities: a blended offline/online experience
In Brazil, Escola de Dados (the Brazilian chapter of the School of Data) wanted to create a blended online and offline experience for people who don't have reliable access to the internet. The two chosen topics were an introduction to programming and an introduction to data journalism.
They decided to take advantage of MOOCs, while providing participants with weekly live check-ins to help combat the main problem of MOOCs: people dropping out from lack of motivation. The check-ins show people the course is really happening and help participants follow through: engagement with participants is crucial.
Escuela de Datos (the Spanish chapter) organizes four types of training activities on data-driven journalism and open data, which can be
- 3-day events,
- monthly sessions of 3 hours,
- data expeditions and
- production workshops.
An example they presented was a data expedition on energy. Digging into the huge plethora of data about energy, politics and sustainability showed that choosing a theme the participants know nothing about (public transport in China, for example) makes for an unsuccessful experience. It is important to take a dataset that is familiar to the participants and specifically tailored to the profiles of the people in the room.
Designing passion driven communities
Heather Leson and Rebecca Kahn shared their experiences in building communities, and helping to create fruitful and meaningful experiences for individuals through the power of social connections.
The first lesson of the table is that learning individually doesn't work, or at least not as well as learning within a group. By creating a social wrapper, a peer group, much more meaningful learning experiences can be built; this is the main reason for trying to create a community in the first place. That perspective opens the opportunity to start designing learning experiences for communities.
Creating community-based learning processes is also key to a healthy individual process, so that participants can contribute while they are learning. Everyone comes to a community with different needs, styles and asks: some learn by making, others by teaching. It is important to include systems for identifying and accommodating the different experiential styles in a community. Groups can also be sub-divided by skill, or by other aspects (like favorite band) that make for a different social cohesion.
One important point presented was the importance of creating a safe and comfortable space for participants, that will give them the opportunity to share their lack of skills in a safe and inclusive way, and give participants the opportunity to get deeply involved in a learning process that is tailored to them.
The concept of defining learning partners resonated particularly: pairing up with another person helps keep check-ins and progress on track, and it also helps when coping with emotionally difficult themes, such as datasets on deaths and immigration in Europe.
Update July 17, 3PM CET
Neelie Kroes, the Vice-President of the European Commission responsible for the Digital Agenda, gave a stirring talk about the role of open data in the EC's thinking about access to information across borders. She is on her way out of her digital post and cited the need for an entirely new mindset at the EC and a generational shift in thinking about the role of openness in innovation, transparency, and fairness. She was also one of the first speakers to lay out a theory of change in open data:
Being open means better information, which means better decisions, which means better government.
And in an ironic twist, Kroes called for an open internet (to much applause) for all in Europe, while her speech was blocked on the livestream of OKFest for those with an IP address from Germany.
The second keynote session saw Eric Hysen from Google expanding on the tech giant's vision for openness. Hysen compared our current situation to the development of turnpikes in 18th-century Britain, when horse-drawn carriages were capable of traveling relatively fast but were slowed down by the terrible state of the roads. The turnpike trusts improved the roads, introduced milestones and set up a system of tolls that drastically reduced travel times. The parallel with today is open access and open source: improving access mechanisms will improve the entire ecosystem of open.
In his keynote, Hysen made three statements for improving the concept of "open":
- Open isn't enough: there is a need for projects to be structured, updated and licensed correctly, in order to support sustainability and profitability
- Interoperable data: interoperability is key. It is far too hard to connect different types of data together, and it is crucial to be able to do so in order to scale efficiently. An example of interoperability is the wifi protocol, which permits us all to connect to the internet and between devices, without looking at type, model or producer.
- Ecosystem, not apps: with the slightly provocative statement that killer apps don't exist solely on top of open data, Hysen made the case that ecosystems are crucial for success because they consider the community and the larger picture, and ensure connections between the project and the context in which it is being built.
Jonathon Morgan from CrisisNET presented the project, through a quick intro and a demo, followed by Q&A and group discussions.
CrisisNET collects disparate information from various sources and combines it into a single data pipeline that can be leveraged for visualizations and aggregates. It can extract knowledge onto a map, let users filter and visualize data, and let them create their own API to use with their projects. Some of the sources of information are Ushahidi, Facebook, Twitter, Instagram, and ReliefWeb.
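As a rough illustration of the kind of filtering such a unified feed enables, here is a sketch in Python; the field names and records below are invented for the example, not CrisisNET's actual schema:

```python
# Invented sample items, loosely imitating a unified crisis-data feed.
items = [
    {"source": "twitter", "tags": ["flood"], "geo": (33.5, 36.3), "text": "road closed"},
    {"source": "reliefweb", "tags": ["conflict"], "geo": (35.0, 38.0), "text": "report"},
    {"source": "instagram", "tags": ["flood"], "geo": (33.6, 36.4), "text": "photo"},
]

def filter_items(items, source=None, tag=None, bbox=None):
    """Return items matching an optional source, tag and (lat/lon) bounding box."""
    out = []
    for it in items:
        if source and it["source"] != source:
            continue
        if tag and tag not in it["tags"]:
            continue
        if bbox:
            lat, lon = it["geo"]
            lat_min, lat_max, lon_min, lon_max = bbox
            if not (lat_min <= lat <= lat_max and lon_min <= lon <= lon_max):
                continue
        out.append(it)
    return out

# All flood reports inside one geographic window, regardless of source.
floods = filter_items(items, tag="flood", bbox=(33.0, 34.0, 36.0, 37.0))
print([it["source"] for it in floods])
```

The value of a single pipeline is exactly this: one filter expression spans Twitter, Instagram and ReliefWeb alike.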
As an initial use case, CrisisNET collaborated with Eliot Higgins, a crowdsourced data journalist who is an expert in collecting enormous amounts of data from social networks and other online sources, and then analyzing it to provide journalists with investigation threads or credible evidence. CrisisNET pulled all the content that Higgins collected on Syria and inserted it into the system, which is publicly available on their website.
Unlike other apps, CrisisNET only provides data: it builds the plumbing. In Jonathon's words:
We don't care about what you do with your data, we just provide the API. We like to have the information available and let the users choose what is reliable and what is not
Some very insightful questions came from the room, especially about reliability of sources, and strength of methodology behind the process. While it is still a very young project, CrisisNET is thinking about how to implement verification checks. Also, the entire platform is open on GitHub and developers are encouraged to improve upon it, use it offline and modify it for their own uses.
The final discussions were held around some of the most important questions that any data collection project needs to look into, like what barriers for data availability exist, how to find narratives inside a dataset and how data generators (the communities) can actually make use of that data. The room provided insights like knowing the audience, ensuring that data collection is actually useful and usable, and involving communities in participatory processes.
Governments are increasingly opening data, but so what? Who's using it? And how? The Open Data Research Network has 17 research projects looking at who's using open data and, crucially, who's acting as intermediaries and what they're up to. During this session three of these projects spoke about their research, and we delved further into the different kinds of intermediaries.
What do we mean by intermediary? Intermediaries are the people or organisations who act as the link between the supply of data and the users of data. This role can go beyond the usual suspects and includes local sources of information such as traditional chiefs. The intermediaries are not the users of the data, but rather the ones who add value to the data by analysing, curating and presenting it.
They are powerful players, able to shape the supply of data. Intermediaries can act as the suppliers of data themselves as they often collect a lot of data that governments can't reach and is not accessible to citizens.
The South Africa project looked at whether the data being supplied was being used by public higher education institutions.
They found that the providers of data were the Department of Education, as well as the universities, which supply individual data. The intermediaries were an NGO which was repackaging that data as indicators, and a private company getting data from the same source and creating data dashboards which the universities paid for.
Transparent Chennai, an Indian NGO, looked at the quality of government data. They found it was patchy and inconsistent and couldn't be used by governments for planning, nor by citizens to hold government to account.
In Nigeria, they looked at how budget data was being used and how citizens engaged with it. The intermediaries in this case were three NGOs: Budget Nigeria, which created web-based visualisations of budget data; Follow the Money, which added geolocation to the budget data to map it; and the Center for Social Justice, which summarised data visually and disseminated it in print for communities without internet access.
During the session we broke down the intermediaries into NGOs, community groups, government, media and the private sector. It quickly became clear that the relationships and functions of the players involved couldn't be neatly broken down into data providers, data intermediaries and data users. As is often the case, the real world is messy, complex and not easily categorised.
This session was designed to unpack a word often attached to data projects: participation. The session, facilitated by Linda Raftree, included presentations from three activists with examples of unintended dynamics of participation.
Nancy Schwartzman from @circleof6app discussed the power dynamics in personal relationships in which many women and girls around the world are constantly surveilled. This undermines their ability to take part in seemingly open dialogues. She also addressed the case studies of mapping harassment and how it has been done well (and with careful curation and participation of users) and how it has been done badly (throwing a map up on the internet and assuming that women will feel comfortable to report incidents of harassment).
Lina Srivastava addressed the explicit irony that there is no data on undocumented migrants. Where there is no data, it is difficult to use analysis of trends to make decisions and call for change. She also highlighted the way competing narratives can result in stalled change and sometimes regression. Citing the case of data on immigrants on the US-Mexico border, she pointed out that new numbers showing fewer immigrants illegally entering the US, but more dying at the border, are considered a success by conservatives.
Simeon Oriko presented a few great examples of mapping projects in Kenya that didn't take into account realities in the country. For a mapping project designed to document crime in the slums, there was little consideration of the fact that no one wants to report crime with their smartphone: their smartphone will just get stolen. He also made the obvious but fantastic point that when a hospital runs out of medicine, the last thing a patient wants to do is whip out their smartphone and report it.
The session then broke into a few groups to tackle concrete, crowd-sourced problems that participation projects have had, and found that issues of informed consent, communicating and recruiting for diversity and inclusion, and long-term learning about how to widen the circle of participation were common across all projects.
In this session, the facilitators Zara Rahman, Gabriela Rodríguez and Marcos Vanetta presented the concept of the data pipeline, and how to find the right tools for each part of the pipeline. The session was inspired by the data handbook, with a goal to build an OKFest toolbox, sourced from the participants themselves.
The data pipeline is defined through the following steps:
- Acquire data
- Extract data
- Clean data
- Analyze data
- Present data
The participants were split into groups to think about which tools are most useful for each specific step. Through this very hands-on session, a toolbox for each step was developed and documented on the session's online collaborative pad, accessible here.
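The five pipeline steps above can be sketched as a chain of small functions. The dataset and function bodies below are purely illustrative, not the session's actual toolbox:

```python
# Inline CSV text stands in for a real download or API call.
RAW = "city,population\nBerlin,3500000\nParis, 2200000\n"

def acquire():
    # In practice: download a file, scrape a page or call an API.
    return RAW

def extract(text):
    # Parse the raw CSV text into row dictionaries.
    header, *lines = text.strip().splitlines()
    keys = header.split(",")
    return [dict(zip(keys, line.split(","))) for line in lines]

def clean(rows):
    # Strip stray whitespace and cast numbers to integers.
    return [{"city": r["city"].strip(), "population": int(r["population"].strip())}
            for r in rows]

def analyze(rows):
    # A trivial analysis: total population across cities.
    return sum(r["population"] for r in rows)

def present(total):
    # Presentation: format the result for a reader.
    return f"Total population: {total:,}"

print(present(analyze(clean(extract(acquire())))))
```

Each stage feeds the next, which is why the session treated tool choice step by step rather than looking for one tool that does everything.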
Update July 17, 10AM CET
Good morning from the OKFest! This morning we will be following the live keynotes from Neelie Kroes.
Morning start and keynotes (10:00 - 12:00)
Sessions and panels
We will also be live tweeting these sessions on @engnroom, with hashtag #OKFest14.
We Want You
Are you in or around OKFest? Are you meeting interesting and insightful people, having your mind blown by incredible new projects? Tweet us @engnroom or send us an email on firstname.lastname@example.org.
Update July 16, 7PM CET
This session, co-organized by the engine room team, was an open conversation about concrete cases where opening data has gone wrong. The session started with a presentation of examples to spark conversations, through actual practical use cases presented by people who either worked on the projects or have an intimate knowledge of them. Following this, we asked the participants to share their own horror stories of open data gone wrong, and to ask questions of the panelists. The session was held under the Chatham House rule, so names of participants will be omitted from this writeup. The panelists agreed to share their names publicly.
Janet Gunter from the Restart Project shared horror stories from the land data field. Janet compared data sets to the white trucks ridden into development areas: both convey power and can be disturbing in different scenarios - data sets can be like Lexuses in developing countries. Janet talked about the Bhoomi case study in Bangalore. Published in 2007, it is an ethnographic study of what happened when Indian authorities in Bangalore decided to digitize 20 million land records. IT companies wanted to get this data, aiming to remove "messy" politics and deal directly with outside investors. As a result, the land process became more corrupt, and people were exposed to more risks, with land speculators gaining privileged access to data.
When we open data sets, we should think beyond the "net positive": who is empowered by this activity?
Friedhelm Weinberg from HURIDOCS presented their work on opendata.ge, which - contrary to the name - is not open data, but a platform gathering FOI requests from four major Georgian NGOs. The goal of the website is to make FOI data more transparent, but also to pool information so that responses can be compared and cross-referenced. Which public institutions are always late, or on time? Is there disparity between agencies?
The main problem here is aggregation of the data: comparing performance across categories is the logical way forward, but even NGOs don't agree on what an FOI request is.
There was no agreement on what to use as performance indicators, and requests were different. But you have to work with those already doing this work.
Reuben Binns spoke about the UK National Health Service, where citizens first go to general practitioners and are then referred to specialists. In 2012, the Health and Social Care Information Centre decided to aggregate this data from GPs. The data was divided into red, amber and green. Red data was personally identifiable information that would be stored on their servers but not published, while green was safe to share. "Amber" meant "soft anonymisation": identifiable information was replaced by a pseudonymous code and linked to hospital visits across datasets.
Amber data was not processed following best practices for this kind of anonymisation. It was occasionally uploaded to Google for big data analysis by private firms, or uploaded to Earthware, where it was publicly searchable until the mistake was noticed.
"Open data" is a label that doesn't apply to pseudonymous data sold for a profit; the open data community needs to be careful about how the terminology is used.
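For readers unfamiliar with the pseudonymisation described above, here is a minimal sketch: the direct identifier is replaced by a keyed code, so records can still be linked across datasets without publishing the raw ID. The key and patient IDs below are invented, and this is an illustration of the idea, not the HSCIC's actual method:

```python
import hmac
import hashlib

# The key must stay out of any published data; anyone holding it can
# recompute (and thus link) the codes.
SECRET_KEY = b"keep-this-out-of-the-published-data"

def pseudonym(patient_id: str) -> str:
    # A keyed HMAC rather than a plain hash: without the key, an attacker
    # cannot simply hash every possible ID and match it against the codes.
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()[:12]

records = [
    ("4501234567", "GP visit"),
    ("4501234567", "hospital admission"),
    ("4509876543", "GP visit"),
]
# Same person, same code: the two events for the first patient stay linked.
linked = [(pseudonym(pid), event) for pid, event in records]
print(linked)
```

The linkability that makes this useful is precisely what makes it risky: pseudonymous data is not anonymous, which is the point Binns was making.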
Mushon Zer-Aviv presented the Public Knowledge Workshop, a volunteer-based NGO with more than 100 volunteers working on different projects.
We are the Don Quixote of Israeli democracy - fighting a lot of windmills
In 2010, the Israeli government decided to open its budget. Unfortunately, it was uploaded in a very complex fashion, but still celebrated as government transparency. So the Public Knowledge Workshop developed a new way to visualize that data, far more accessible to the public, which has actually been used by the parliament as a visual table of contents. But while the website was wildly popular, they realized that it missed part of the data: up to 10% of the budget can be reshuffled after it is signed off, and the website doesn't capture that information. To fix this, they are currently working on a new website that can be more communicative and present the full context of these transactions. They are trying to make the budget transparent by design, and now find themselves in a position to shape discussions of budget transparency.
Now what? We are software devs, not financial analysts. Who are we to frame discussions of the budget? We have a bias toward things that we can quantify, but if something can't be quantified, that doesn't mean it's not important. Not everybody can know everything needed to understand a dataset.
After the panelists, the room started sharing their own stories of open data gone wrong, in a surprisingly rich and active fashion. While we won't mention names or specifics, some of the use cases discussed were: a government policy being made into law without any notice to the person who actually wrote the initial brief; a science story trending on Reddit about how "farts can change DNA and cure cancer", citing a research paper that says nothing of the kind; examples of stories in which opening data can actually be fatal to investigative journalists; reminders of the inability of tech to fix everything; the creation of mapped data for a big international agency in which large commercial mapping companies closed off what was initially open data; and more.
Sometimes it’s the most out of place sessions that trigger the most thinking and learning. The Restart Project, an outfit based in London, is pushing for a rethinking of how the “global north” uses electronics. All of the tech innovation and exponential growth of computing power discussed at events like OKFest requires hardware. Hardware gets old, and it gets old fast. And in the global north we throw away devices as fast as we can upgrade them, racing to distance ourselves from yesterday’s technology. The Restart Project encourages learning about how the devices we depend on actually work. They also encourage shifting attitudes about fixing culture. How can we learn from peers in the global south and start fixing things instead of throwing them away? And what economies and geo-political evils do we support by buying without thinking?
To spark this discussion, the Restart Project brought dead equipment and encouraged participants to dissect it. No expertise required. After the body parts of printers and computers were splayed on the table, they led a discussion about what kinds of elements and parts these devices are made of.
To motivate data brains in the room, Restart also facilitated discussion seeded by participant inspiration. Based on this hardware and discussion about the materials inside of them, they asked what investigations would make sense to pursue.
If you have ideas about the kinds of things data can do to shine a light on the price the world pays for accelerated development of new technologies (and bad design and disposal practices) tweet with the #OKFestEE hashtag and follow @RestartProject if you want to learn more about their work!
This session was sponsored by Artists without a Cause and led by African musicians Juliani from Kenya and Valsero from Cameroon, coordinated by grassroots activist Sasha Kinney. The goal of the session was to present music as a powerful alternative way to tell data stories, combining different participatory methods to build a shared message. Valsero presented his take on the power of music as a vehicle for emotions, especially when using data as the basis for the creation of the message.
In our culture, music is the way to express everything. Music is everywhere. You cry, you laugh with music. You want to use music to change the world.
Thanks to its instinctive emotional power, music can be a very powerful alternative to the usual suspects in data-driven advocacy, such as visualizations, data-driven journalism or campaigning by numbers. As an example, Valsero spoke about HIV data in his country, and how a 5% HIV-positive population can be translated, musically, into a much stronger message than just words on paper.
Musicians are seen as artists of engagement. They create word visualizations for people to absorb more easily. Their role is to create imagery that conveys emotional messages, and in that process they can move far away from actually naming the thing they are singing about.
To illustrate this concept, artist Juliani sourced ideas for creating the OKFest anthem from the participants, in order to build a list of concepts that are close to their hearts. As a second step, Juliani then asked participants to translate these concepts into imagery, such as:
- transparency for the powerful, privacy for the weak - a two-way mirror
- CSV files - a common language
- free software - Robin Hood
- digital security - curtains in the bathroom
- hack the bank - turning the piggy bank on its head
After that, participants took turns to rap the words that express their imagery into a mic, recorded by the event's crew. Will we have a hip-hop, social, activist, open source, collaborative OKFest anthem soon?
Ingrid Burrington and Josh Begley, two US data artists, presented their thoughts on digging through concepts in an iterative way to find the specific aspect of information that conveys the most meaning to the viewer.
Through examples from Ai Weiwei, González-Torres and others, Burrington made the case that art is very much about context and deep dives. Doing one thing and iterating over it allows a person to know more, starting with one dumb thing that makes them look at other pieces.
Josh Begley spoke about how his fascination with specific single things took him on a virtual trip all over the United States to uncover the geographic footprint of the state prison system, which he then published at http://prisonmap.com/. Josh also told the story of Metadata+, an app that tracks US drone strikes. In his own words,
in terms of stupid things and doing them over and over, I think that describes every project I ever did.
Update July 16, 1PM CET
The Open Data Institute organized a hands-on, hour-long anonymization exercise using a dataset of Titanic passengers. Participants were encouraged to think critically about what constitutes direct and indirect identifiers, i.e. what data can directly identify a person, and what data can help identify a person if other information is already known. Some techniques the session covered were:
- normalization, or averaging a range of numbers so that real information is “hidden” (representing ticket prices through a 1-100 range instead of actual prices)
- removing outliers (the very recognizable edge cases)
- adding noise (randomly changing a range of numbers, while still keeping statistical validity)
- sampling (picking only a subset of the dataset)
- aggregation of data (showing only averages)
Throughout the exercise, it was clear that anonymization is a balancing act between richness of information and privacy protection, and that it is necessary to gauge the sensitivity of the data and anonymize accordingly. ODI's suggestion to the group was that an organization wishing to publish aggregate data is probably in the clear; one wanting to publish individual-level data should speak to an experienced analyst.
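A toy sketch of a few of these techniques, applied to invented passenger records rather than the real Titanic dataset (the field values, band width and noise spread are all illustrative choices):

```python
import random
import statistics

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical passenger records, loosely echoing the session's exercise.
passengers = [
    {"name": "A", "age": 29, "fare": 7.25},
    {"name": "B", "age": 35, "fare": 71.28},
    {"name": "C", "age": 54, "fare": 51.86},
    {"name": "D", "age": 2,  "fare": 21.07},
    {"name": "E", "age": 27, "fare": 11.13},
]

# Normalization/banding: replace exact fares with coarse ranges.
def band(fare, width=25):
    lo = int(fare // width) * width
    return f"{lo}-{lo + width}"

# Adding noise: perturb ages while keeping them plausible.
def noisy(age, spread=2):
    return max(0, age + random.randint(-spread, spread))

# Sampling: publish only a subset of the rows.
sample = random.sample(passengers, k=3)

# Aggregation: publish an average instead of individual values.
avg_fare = statistics.mean(p["fare"] for p in passengers)

# The published records drop names entirely and coarsen the rest.
anonymised = [{"fare_band": band(p["fare"]), "age": noisy(p["age"])} for p in sample]
print(anonymised)
print(round(avg_fare, 2))
```

Even in this toy version the trade-off is visible: the wider the fare bands and the stronger the noise, the safer the release, and the less useful the data.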
At the beginning of her session on how to define and design data journalism initiatives in the developing world, Eva Constantaras explained a key difference between media sectors in the developed and developing world:
In a lot of the countries we work in, the media environment and industry is doing just fine. There is no crisis yet, it’s coming but their print media has high circulation and is doing ok.
In that kind of environment, how can support organizations encourage innovation in data journalism? In the global north, innovation in data journalism has been led by scared media outlets innovating to protect their slipping market share.
The session continued by crowdsourcing the challenges faced in supporting data journalism in the developing world. Challenges included:
- lack of availability of non-mundane data
- limited data literacy
- data that is editorially difficult to use
- ego of editors
- fewer specialists
- profitability - few cases to point to (even in the "developed" world)
- format: how do you tell data journalism over radio?
Small groups brainstormed successful approaches for support and a few highlights included:
- prioritizing long term engagement
- not getting too ambitious in terms of the number of different projects a data journalism support project can work on
- embedding data journalists but making their role in one project explicit
- prioritizing the needs of the audience over everything else
- not getting caught up in tech glitz
Have you seen other examples of effective data journalism support efforts in the global north or south? Let us know at email@example.com.
Update July 16, 12PM CET
The opening keynotes were very different, but both highlighted the importance of advocacy, investigation, coalition building, and other types of heavy lifting needed not only to generate open knowledge, but to do something with it.
Patrick Alley described several information chains in effective Global Witness campaigns and highlighted the need for more open data, more communication between investigators, targeted advocacy when using open data, and how shining the light on the intersections of states, companies, money, and power can lead to more openness. When this openness is combined with targeted advocacy, it can sometimes lead to change.
"We are losing our ability to translate our voices into meaningful actions when it matters" great point by @kenyapundit #OKFest14
— Anahi Ayala Iacucci (@anahi_ayala) July 16, 2014
Beatriz Busaniche focused on the challenges facing the open knowledge movement in intellectual rights. She made the case for using historical discussions about intellectual property to move past the current debate and into a more rights-based understanding and campaigning for freedom of information. Like Alley, she emphasized the prominence of on-the-ground advocacy and coalition building as key to gaining ground.
UPDATE July 16, 10AM CET
Good morning from Berlin, where the first official day of events is heating up.
These are the sessions we’ll be covering today:
Morning start and keynotes (09:00 - 11:00)
- Tools tools and more tools – Wall
- Keynote – Patrick Alley
- Keynote – Beatriz Busaniche
- Fireside Chat – Ory Okolloh
Sessions and panels
- Save the Titanic: Hands-on anonymisation and risk control of publishing open data
- Defining and Designing Successful Data Journalism Initiatives in Developing Countries
- Packaging your Message, Mobilizing the Public: Collaborative Music Production for Everyone
- E-waste exploration: Opening up our gadgets and looking into what happens when they die
UPDATE July 15, 9PM CET
The 2014 Open Knowledge Festival has started!
In the picturesque setting of the Kulturbrauerei, a disused brewery re-imagined as North Berlin’s cultural and creative hub, the Open Knowledge team kicked off the three-day event with an evening of sponsor presentations, an open faire and artists from around the world.
After presenting the main sponsors of the event, Google, Omidyar Network, Making All Voices Count and the Partnership for Open Data, the participants moved to the Faire space, to learn more about new and interesting projects in the Open Knowledge arena.
Some of the projects on show are:
- the Peer Library, an open source project that aims to provide researchers with a collaborative platform where users can upload PDFs of scientific papers and annotate them publicly or in private groups. The Peer Library sees the original document as a seed for future insight and research, and hopes to provide more balanced access to peer research for people who have difficulty accessing the formal academic peer-research networks.
- Opening Closistan, a seed project by OKF Egypt to produce tools and tactics for opening data in countries where governments don’t wish to collaborate, or are downright hindering the open process.
- School of Data, an organization that works to empower civil society organizations, journalists and citizens with the skills they need to use data effectively.
- Open Access Button, a browser plugin that lets users report a paywall when trying to access a publication. The Open Access Button team then does their best to find ways to provide that information to users.
The Faire then moved to dance and performance thanks to artist exhibitions by csv soundsystem (electronic music inspired by and enmeshed with code, visualizations and data manipulation) and Politaoke, an event sponsored by Artists Without A Cause (AWAC) where participants perform karaoke using famous political speeches instead of songs.
We’re looking forward to tomorrow’s opening session with keynotes by Patrick Alley, director of Global Witness, Beatriz Busaniche from Via Libre, and a fireside chat with Ory Okolloh from Omidyar Network.
This liveblog post is a dashboard of the engine room coverage of OKFestival 2014. We will be updating it throughout the next three days with snippets of coverage. If you want to submit information or photos/videos for inclusion, email us at email@example.com! Also follow us on twitter @engnroom and #OKFest14 to keep tabs on the event.
Personal Democracy Media is grateful to the Omidyar Network and the UN Foundation for their generous support of techPresident's WeGov section.
TechPresident partners with the engine room to surface and connect emerging tactics and initiatives.