Community Graphs

We need to talk about the ‘C’ word getting thrown around lately.

Yes, I’m talking about Collusion.

Any system which aims to hold certain values above money gives rise to a monetary incentive to skirt the rules.

For example, think of governance and justice systems. Collusion within these systems (buying/selling votes, bribing judges, and paying off dirty cops) is a problem as ancient as the systems themselves. Consequently, these systems have had to find ways to discourage collusion with (often severe) penalties.

As we build out new decentralized systems of governance, we have to develop tools to discourage collusion, or mechanisms which rely on personhood, such as online voting and quadratic funding, will be easily corrupted.

Sybil Resistance vs. Collusion Resistance

What does personhood have to do with collusion?

Well, a lot. Within the context of online voting, creating a second (fake) account to cast two votes instead of one yields the same result as asking a friend to create an account and cast that second vote for you.

If creating a fake account becomes too difficult, we can be certain that people will resort to putting pressure on or paying others to be their ‘puppets’.

Last week, Puja (@pujaohlhaver), Paula (@_paulaberman), and Mikhail (@cryptoidentity) published an amazing in-depth case study of a project called Idena, a Proof of Personhood Protocol that experienced exactly this kind of pain.

We should all be extremely grateful to Mikhail and Idena for participating in this post-mortem, and I would highly recommend reading the entire paper; it’s super insightful and thought-provoking, both in its analysis and its discussion.

What the authors drive home is that sybil-resistance and collusion-resistance are really two sides of the same coin. If you incentivize sybil attacks, but you only solve for personhood, people will simply start colluding.

Idena’s personhood mechanism required users to perform recurring captcha-puzzle-solving ceremonies to prove, and maintain, their unique personhood. Maybe not the most elegant UX/UI, but it solved the sybil problem well enough that, with minimal exceptions, every Idena account was a real person with only a single account.

The protocol also had a UBI token that was distributed to all account-holders (persons), making it perfect for a case study, as there was a monetary incentive to game the system.

What transpired is extremely illuminating.

People didn’t spend much time trying to create multiple fake accounts. Instead, the path of least resistance was to recruit more people to create accounts.

For example, a Russian company hired a bunch of people in low-wage countries to perform the personhood ceremonies, but never gave them the private keys to their own accounts, and pocketed the difference between the UBI token earnings and the hourly wages.

Other versions involved set-ups where the workers technically had access to their private keys, but they didn’t possess the know-how to do anything with them. In one scenario, a company hired a bunch of children in Egypt as their puppets to perform the personhood ceremonies.

Perhaps another lesson here is to start small, test things out, and run lots of experiments in low-stakes environments before scaling out anything that might be financialized.

Collusion breaks Quadratic Funding

A mechanism particularly vulnerable to collusion is Quadratic Funding (QF).

QF is a special type of voting mechanism in which people vote with their money (contributions) to decide how much public goods funding various projects should receive.

In QF, the intention is for each contribution’s influence on funding to scale with the square root of the amount contributed, similar to Quadratic Voting (QV), where the strength of a vote scales with the square root of the voting points (sometimes called voice points) spent. The idea is that you can express a strong preference by putting more of your voting points (in QF, you vote with money) toward the thing you feel strongly about, but with diminishing returns.
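The square-root mechanics above can be sketched in a few lines. This is a simplified version of the standard QF matching formula, where a project’s total funding is the square of the sum of the square roots of its individual contributions; real deployments additionally scale the matched portion to fit a fixed-size matching pool, which is ignored here.

```python
import math

def qf_match(contributions):
    """Simplified quadratic funding: a project's total funding is
    (sum of sqrt(c_i))^2 over individual contributions c_i.
    The excess over the raw sum of contributions comes from the matching pool."""
    return sum(math.sqrt(c) for c in contributions) ** 2

# Many small contributors beat one large contributor of the same total:
broad = qf_match([1] * 100)   # 100 people giving $1 each
single = qf_match([100])      # 1 person giving $100
```

Here `broad` comes out to 10,000 while `single` is only 100, which shows both the appeal of QF (broad support is rewarded) and exactly why sybil attacks and collusion pay: splitting one contribution across many accounts, real or fake, multiplies the match.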

The math behind QF describes a very elegant way to allocate a pool of money between projects, but it rests on a set of assumptions about why somebody would contribute, and those assumptions don’t account for outside influences acting on the contributors.

But what we observe is that those submitting projects for funding will often try to influence others to contribute to their project. And they often don’t even feel like they are doing anything wrong in the process. It’s not at all intuitive why encouraging donations to your public goods project would be problematic.

But if enough people are successfully influenced, QF doesn’t deliver on its goal of allocating public funding toward the greatest positive impact. Even if all the projects are well-intentioned, it will misallocate funds toward the projects with the widest audience reach or the most political sway. It also attracts fraudulent projects into the game, risking funneling public goods funds toward scammers.

Anti-Collusion Mechanisms

What makes QF especially vulnerable to collusion attacks is its monetary construct. You can vote with money to receive more money, and money is fungible / easily transferable. If we can make QF collusion-resistant, less-vulnerable systems can then adopt that approach knowing it has been stress-tested, at least in some ways, under more demanding conditions.

Various anti-collusion mechanisms for QF have been proposed which use correlations in people’s votes / contributions, or correlations in their social connections, to probabilistically infer collusion and modify the QF matching accordingly. For example, if we can algorithmically detect that two people often vote together, we can deduce that they are probably colluding, and discount the matching funds their contributions receive. Or, if we have insight into contributors’ social ties and connections, we can treat a strong social connection as a sign that they are more likely to collude, and adjust the matching results similarly.
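As an illustrative sketch of this family of mechanisms (hypothetical, not any specific deployed protocol): standard QF expands algebraically into each contributor’s raw money plus pairwise cross terms, and a correlation-based mechanism can scale down each cross term by how correlated that pair of contributors appears to be.

```python
import math

def qf_match_discounted(contributions, corr):
    """Correlation-discounted QF (illustrative). Standard QF,
    (sum_i sqrt(c_i))^2, expands to
        sum_i c_i  +  sum_{i != j} sqrt(c_i * c_j).
    Here each cross term between contributors i and j is scaled by
    (1 - corr[i][j]), where corr[i][j] in [0, 1] is an estimated
    correlation, so highly correlated pairs add little beyond their
    raw money."""
    n = len(contributions)
    total = float(sum(contributions))
    for i in range(n):
        for j in range(n):
            if i != j:
                total += (1 - corr[i][j]) * math.sqrt(contributions[i] * contributions[j])
    return total
```

With all correlations at zero this reduces exactly to ordinary QF; with a pair fully correlated (`corr = 1`), that pair contributes only its raw money, as if it were a single voice. The weak point, as the next section argues, is the `corr` matrix itself.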

The limitation that these mechanisms share is that they rely on correlation, but correlation does not imply collusion (just as it does not imply causation).

We can illustrate this point with even a simple voting system.

Every day, Bob, Alice and Sarah vote on what to eat for lunch together.

Their choices are:

  1. Pizza-to-the-Moon (Italian)
  2. Cheddar Jack (Burgers)
  3. Tokyo Palace (Sushi)

Over time, we observe that Bob and Alice are highly correlated in their voting. They each vote for Pizza-to-the-Moon much more often than the other two options, and there’s a high frequency of voting the same way on the same days. On the other hand, Sarah seems to be about evenly split over time, and not correlated with Bob or Alice.

And suppose we also know some information about Bob and Alice’s social ties, that they know each other and have a lot of friends in common, while Sarah is further removed from both of them on the social graph.

Given this data set, the philosophy of the anti-collusion mechanisms would be to discount Bob’s and Alice’s votes on the grounds that they are likely colluding.

In reality, they may very well be colluding. Or they may be voting completely independently.

Consider two sets of scenarios.

Scenarios of set A: Collusion

  • Bob is bribing Alice to vote a certain way, or vice versa
  • Pizza-to-the-Moon is bribing one or both of them
  • Bob frequently exerts social pressure on Alice to vote with him for Pizza-to-the-Moon

Scenarios of set B: No Collusion

  • Pizza-to-the-Moon just happens to make really great food, and the other two restaurants make crappy food. And Sarah happens to not care much about the food quality (we all have that friend), she only cares about the ambiance, and all three restaurants have equally satisfying ambiance
  • Bob and Alice happen to prefer Italian food to American and Japanese food, while Sarah likes them all about the same
  • Bob and Alice go hiking together all the time and after a long hike they feel like eating carbs, so they are in the mood for pizza, resulting in them voting for pizza on many coinciding days. Sarah doesn’t have the same carb cravings, so she’s equally okay with burgers or sushi every time she votes

In both sets of scenarios, the data available to the algorithm(s) would be the same, so there’s no way for them to discern whether collusion is truly happening. To rule out collusion, they would also need visibility into the participants’ food tastes and hiking habits, the restaurants’ food quality, Sarah’s preference for ambiance, and a million other factors which might give rise to alternate explanations. Putting all this data online would come with tremendous barriers, including huge technical costs, impracticality, and very serious privacy concerns.
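A tiny simulation makes the indistinguishability concrete. The two generative models below are hypothetical: one where Alice copies Bob’s vote (set A), and one where the two vote independently but share a strong taste for pizza (set B). Both produce voting records with very high agreement, so a detector that only sees the votes cannot reliably separate them.

```python
import random

OPTIONS = ["pizza", "burgers", "sushi"]
PREFS = [98, 1, 1]  # assumed strong shared preference for pizza

def collusion_day(rng):
    # Scenario set A: Alice is bribed or pressured to copy Bob's vote.
    bob = rng.choices(OPTIONS, weights=PREFS)[0]
    return bob, bob

def independent_day(rng):
    # Scenario set B: same tastes, zero coordination between them.
    bob = rng.choices(OPTIONS, weights=PREFS)[0]
    alice = rng.choices(OPTIONS, weights=PREFS)[0]
    return bob, alice

def agreement_rate(day_fn, days=10_000, seed=42):
    rng = random.Random(seed)
    return sum(b == a for b, a in (day_fn(rng) for _ in range(days))) / days
```

Under collusion the agreement rate is exactly 1.0; under independence it is about 0.96 in expectation (0.98² + 0.01² + 0.01²). An observer who only sees high agreement cannot tell which world they are in.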

While an example involving the QF anti-collusion mechanisms would be a bit more complex to illustrate, the same limitations apply. Any algorithm which discounts the strength of contributions based on correlation assumes collusion where there might not be any, shifting the matching funds from more deserving to less deserving projects.

Should we keep using QF?

The most beautiful thing about QF is that going through QF rounds is a forcing function: it gets people to actually think about doing good, about what they care about, and about how to make the biggest impact.

That psychological shift is a huge positive impact that is not accounted for anywhere in the math, but probably exceeds all the other impacts by at least one order of magnitude, in terms of importance.

So even though we’re often not achieving optimal QF results today, it’s a huge net positive and we should do more of it, all while we work to improve its collusion-resistance.

What is Collusion, anyway?

This all begs the question, what exactly do we mean by collusion?

If we’re going to fight it, we should first define it.

Within a legal context, it has a pretty clear definition: to collude is to coordinate secretly to get around certain laws or rules. For example, when competing businesses conspire to fix prices.

But outside of the legal field, and in the absence of clearly spelled-out rules, what does it mean to collude?

Well, in those contexts, which include ours, the word carries a negative connotation.

Collusion is the evil twin of coordination, just like manipulation is the evil twin of leadership.

If Bob is influencing people toward an outcome which is deemed self-serving, that’s manipulation. If he’s influencing people toward a group-benefiting goal, we call that good leadership.

In the same vein, if people are working together toward a positively-perceived outcome, we say they are coordinating. But if they’re doing it toward a self-serving outcome which benefits their smaller group at the expense of the larger group, then we call it collusion.

Just like leadership and manipulation, coordination and collusion are subjective and contextual.

What is colluding within QF?

Within the contexts of online voting and QF, colluding means coordinating in a way which doesn’t express the values of the community within which that voting / contributing is taking place.

A collusion-free QF round requires that everyone behaves in a way which makes the greatest positive impact, not in a way which is specific to any one project.

Doing favors or asking for favors, advertising a specific project, and so on, is not aligned with the QF mechanism because it leads to a suboptimal allocation. It’s perfectly okay, and good, to encourage more contributions, but all contributors should be encouraged to inform themselves about every project in that round and to make an independent decision regarding which project(s) to support. In general, it’s important to set guidelines around how everyone in the round should behave, and why.

Anti-Collusion requires Social Coordination

What gives us superpowers as a species is our ability to coordinate; it’s an ability we evolved well before we invented money, using it to take down larger prey or ward off attacking tribes.

We’ve even figured out how to coordinate around completely arbitrary rules, such as in sports. Soccer is a beautiful activity as long as the rules are followed. But if players start using their hands, or if several players collude to switch sides from Team A to Team B, then Team B will almost certainly win, and the game stops being worth playing.

So why is it that the rules are generally followed despite a strong incentive to break them?

Because when we play soccer, we are playing one game with arbitrary rules inside of a larger, repeating game. Within that repeated game, there is a personal ‘score’ that carries over from one smaller game to the next, staying with us and fluctuating based on how we behave within the smaller games. We just don’t think of it as a score. We call it Reputation.

Reputation is the Key

Reputation can take many forms, including the reputation to abide by a set of rules and to not collude.

If we can find a way to represent our reputations online, and allow them to be impacted by how we behave, we can form a powerful defense against collusion.

And by doing so, we eliminate the need to bring a ton of real-world data online. Any attempt to collude will involve other people, and it’s people who determine our reputations. Whether we are colluding will always be subjective and specific to a community, and members of that community will be best positioned to make the judgement call based on our behaviors within that context.

Reputations vary across Groups

Just as in the physical world, our reputations should carry across time and across different communities. That way, people will guard their reputations carefully, and it will seldom be worth risking one’s reputation to collude in a QF round. And for those who do, their reputations will degrade over time, signaling to communities to either give them less of a voice or simply not allow them in at all.

When we join any community, we agree to abide by their rules. If we fail to follow those rules, we risk our reputations within that community, and across other communities, based on overlapping memberships across communities.

Representing Communities Digitally

But what's a community?

What constitutes a community, and who is a member of any given community, and to what extent – these things are all also subjective and contextual.

Sometimes communities are clearly defined, but more often, they are much more loosely organized. If we surveyed community members, asking them how much, on a scale of 1 to 10, everyone else is a member of that community, we’d get varying answers. Alice might give Bob a 6 rating, while John might give Bob a 9 rating. We don’t generally reach consensus on these things, nor should we need to. But we do hold these subjective assessments in our minds as we engage with one another, and they inform our relationships and behaviors.

To more accurately represent our community memberships digitally, those memberships should be on a spectrum instead of binary, and they should be subjective based on the inputs from community members. The inputs should also be fully private or else people won’t give honest assessments.
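A rough, hypothetical sketch of what graded, subjective membership could look like (the `CommunityGraph` class here is purely illustrative, not the protocol itself):

```python
from collections import defaultdict

class CommunityGraph:
    """Toy sketch of membership on a spectrum: each member privately
    rates how much others belong (0-10), and a member's effective
    membership is the average of the ratings they receive."""

    def __init__(self):
        self._ratings = defaultdict(dict)  # rater -> {subject: score}

    def rate(self, rater, subject, score):
        # In a real system this input would stay private to the protocol,
        # so members can be honest without social fallout.
        if not 0 <= score <= 10:
            raise ValueError("score must be between 0 and 10")
        self._ratings[rater][subject] = score

    def membership(self, subject):
        scores = [r[subject] for r in self._ratings.values() if subject in r]
        return sum(scores) / len(scores) if scores else 0.0

g = CommunityGraph()
g.rate("alice", "bob", 6)
g.rate("john", "bob", 9)
# bob's membership lands between the two assessments: a spectrum, not a binary
```

Averaging is just the simplest possible aggregation; a real protocol could weight raters by their own membership or reputation, but the point is that membership becomes a continuous, member-sourced quantity.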

In this way, communities can emerge in a decentralized and permissionless manner, governed by their own members.

Community-specific Reputations

Representing communities in this manner then opens the door to having reputation systems which exist within and across communities.

When we join a community, we might not have any reputation built up with that community yet, but we bring in our reputation from other communities as a starting point. This makes it so we don’t have to start from scratch every time.

And similarly, the way we conduct ourselves within that new community impacts our reputation in that community as well as more broadly.

Community Graphs

We’re building the substrate infrastructure to enable all of this, which we’re calling simply Community Graphs.

It will be a fully open, decentralized, and privacy-preserving protocol to allow us to experiment with many different community governance mechanisms, voting algorithms, and reputation systems (including the reputation to not collude).

And it will be modular, composable, and interoperable with other identity and reputation systems, so that communities can combine them for an even stronger or more reliable signal.

Social media applications can also plug into Community Graphs and provide features and tools to serve various communities. And those apps can interoperate with one another via the Maitri Network to combine network effects and offer their users a richer experience.

To borrow from David (@TrustlessState), it’ll be Legos. But instead of Money Legos, it will be Social Coordination Legos.