This is Google's cache of https://github.com/fantasylandinst/fcop/issues/43. It is a snapshot of the page as it appeared on 15 May 2020 08:51:08 GMT.

Consider expanding scope of inactive participation violations #43

Closed
mjg59 opened this issue on 16 Feb 2017 · 29 comments

Comments

@mjg59 commented on 16 Feb 2017

If a member shames another member during inactive participation, they breach the code. If a member makes racist statements about another member during inactive participation, they do not.

@jdegoes commented on 16 Feb 2017

This is not true, but could be clarified further. Technically, there is no prohibition against shaming—only against shaming using information obtained while in the community. Members are free to shame other members during inactive participation so long as that information is not obtained within the community.

Does this address your feedback?

@mjg59 commented on 16 Feb 2017

No - it is still the case that shaming using material obtained within the community can be punished, but racism in inactive participation cannot be.

@jdegoes commented on 16 Feb 2017

It is not the shaming that is the issue but the shaming using information obtained during active participation. Much behavior that people engage in as part of their online identities is strictly prohibited during active participation (denigration, moralizing, etc.). Extending these prohibitions to inactive participation essentially means no one can ever participate in any FCOP community due to numerous violations—or at least, very few individuals, indeed. For this and other reasons, inactive participation violations are small in number and only relate to protecting members from possibly violent people (for example), protecting personal information from public exposure, and protecting members from professional sabotage.

Note also that if the community leaders believe any member would engage in any criminal activity (such as violence, spitting on someone, hate speech AKA "fighting words," etc.), they can proactively act to terminate their membership in the community. Finally, if someone has a reputation in another community, an FCOP community may take special precautions even if they believe the individual will not engage in any criminal activity.

Closing this ticket for possibly being out of scope, but feel free to open other ones if you find loopholes not covered by the above scenarios.

@jdegoes closed this on 16 Feb 2017
@mjg59 commented on 16 Feb 2017

So, to be clear: engaging in non-criminal racial abuse of a community member during inactive participation can at most be handled by the community taking special precautions, but shaming a member using information obtained during active participation is an explicit FCOP violation? And this is considered to be working as intended?

@jdegoes commented on 17 Feb 2017

So, to be clear, engaging in non-criminal racial abuse of a community member during inactive participation can at most be handled by the community taking special precautions

No, this is incorrect. If the community leaders believe the member would engage in violence or other criminal behavior (based on racial or other types of abuse exhibited in other communities), they can pro-actively prevent participation. If they do not believe the member would engage in any criminal behavior, then they can take (unspecified) precautions, but may not ban them.

shaming a member using information obtained during active participation is an explicit fcop violation?

Shaming or doxxing private information acquired during active participation is an explicit FCOP violation. FCOP attempts to make participation in the community safe (in the sense of being treated well and having an expectation of privacy)—not the world safe.

@mjg59 commented on 17 Feb 2017

Allowing a member to use racial slurs against another member during inactive participation but forbidding a member from being able to discuss another member's behaviour within the community feels like a very skewed set of priorities.

@jdegoes commented on 17 Feb 2017

If we expanded the denigration clause to include behavior during inactive participation, the end result is that (almost) no one would be allowed to participate in any FCOP community due to the behavior of individuals on social media.

As it is, judgment is left to community leaders to decide whether any case of denigration or harassment suggests a safety hazard or not. FCOP does not attempt to eliminate the need for human judgment.

Conversely, if we removed the prohibition on doxxing / shaming during inactive participation, that would be equivalent to removing it for all time, because there is no difference (in impact on a member) between sharing personal details about a member during participation and waiting until after participation to share them.

@ankushnarula Do you have any thoughts here?

@mjg59 commented on 17 Feb 2017

I appreciate the reasons for your conclusion. Nevertheless, it remains the case that a member can racially abuse another member up to (but not beyond) the point of it being a criminal act, so long as they do so during inactive participation, without that automatically being an FCOP violation; yet a member may not share that another member engaged in racial abuse, if that abuse occurred during active participation, without automatically violating FCOP in the process. If this seems like a desirable outcome, then the issue is clearly settled. However, if it does not seem like a desirable outcome, it implies that some of your initial premises may be incorrect.

@ankushnarula commented on 17 Feb 2017

However, if we become aware of behavior in another community that we believe implies a member may not abide by the terms and conditions of FCOP in the community, then we reserve the right to take proactive measures to protect other members, including but not limited to warning members and instituting oversight or monitoring of the member's behavior in the community.

In principle, I agree with you @jdegoes. If member A is aggrieving member B outside the community, the community leaders will intervene if they believe the behavior will result in a violation of the FCOP.

But I can also see why the scope of remediation, as written, is too narrow for @mjg59. As suggested in issue #44, some extreme but legal behavior during inactive participation can disrupt smooth community function without violating the FCOP.

I realize this isn't easily measurable (yet), but here's what I would suggest...

Rather than expand the denigration clause, expand the discretion of the community leaders. They may address grievances beyond the FCOP emerging from inactive participation on a special case-by-case basis. However, these special cases should be strictly limited to "extreme cases" and "unpreventable interactions" occurring directly between members during inactive participation - online or offline.

In practice, "extreme cases" should rarely include one-off instances but the complaints should be logged and addressed at the community leaders' discretion. "Extreme cases" will most likely include patterns of bullying, direct/indirect threats, abuse, harassment, doxxing, etc. "Unpreventable interactions" means that the accused member has been requested to cease the aggrieving behavior and/or has been (physically/virtually) blocked but the accused member persists in the aggrieving behavior.

On the off-chance the volume of such claims overwhelms community leaders, they may delegate or altogether revise this process.

(Also, I have spitballed this together so by all means tear it apart.)

@jdegoes commented on 17 Feb 2017

An obvious concern is potential for abuse of the time of community leaders.

As one example, I have personally been harassed, insulted, etc., on Twitter, and for someone to assess the threats over literally thousands of tweets directed at me by dozens of individuals over several months would be a herculean task that could suck up a tremendous amount of a community's time. And that's just me. Multiply that by all the people in a community, or at least a significant number, and you need teams of volunteers sifting through mountains and mountains of evidence. Then you need to decide whether each classifies as an "extreme case" or not, which would be very messy and most likely driven by the likability of the individual rather than any objective prediction of further disruption.

If the boundaries are not clear, you will end up looking through Facebook "likes" and Twitter "faves" for evidence of malfeasance. You need to decide if you will accept non-recorded interactions based on memory, and if so, whether witnesses will be required and how they will be tracked down.

Most communities cannot bear this burden or anything close.

That said, if there is some way to limit potential for abuse (both by people reporting violations, and by community leaders), as well as limit scope of work, then perhaps we can find some middle-ground.

I'll reopen for further discussion, as well as discuss internally.

@jdegoes reopened this on 17 Feb 2017
@jdegoes commented on 17 Feb 2017

In practice, "extreme cases" should rarely include one-off instances but the complaints should be logged and addressed at the community leaders' discretion. "Extreme cases" will most likely include patterns of bullying, direct/indirect threats, abuse, harassment, doxxing, etc. "Unpreventable interactions" means that the accused member has been requested to cease the aggrieving behavior and/or has been (physically/virtually) blocked but the accused member persists in the aggrieving behavior.

Very clearly thinking this through and spelling it out would be critical.

@EvanBurchard commented on 17 Feb 2017

@jdegoes Why do you feel you have been singled out for shaming? Is this something that every community leader goes through? Is it just bad luck? Is there any way to address this massive horde other than ignoring them? Are some people trying to talk to you right now about how to not have a large group of people mad at you all the time?

Do you think that there's potential that a large group of people are not yelling, but also not engaging because they think there's something wrong with the community that they can't fix? Does that matter to you?

@throwaway1973 commented on 17 Feb 2017

Okay, I think the concrete use case here is: Alice meets Bob through an FCOP-governed community. During inactive participation Bob complains to Carol about Alice without reference to Alice's behavior in the community (race in this example), Carol and Bob then google-stalk Alice, get her personal information, and dox her without being in violation of the FCOP.

The fix for this seems to be expanding the scope of:

Be Civil. Obey the law, and do not sabotage the community or member's careers.

To simply:

Be Civil. Obey the law, and do not sabotage the community or its members.

Doxxing a member during inactive participation may not harm the person's career, but it is still harmful. This may perhaps be too broad, but I'm failing to see a compelling counter-use-case.

@EvanBurchard as near as I can tell, he was singled out for shaming because the far-left in America has no sense of history / has forgotten that the same one-sided rules they now seek were once the source of injustice that they (rightly) stood against. See, for example, the Vietnam-era free speech movement and the modern situation regarding hate speech. I think every community leader suffers an angry mob at one time or another; most fold to the pressure. It's to John's credit that he turned that into the FCOP. He somehow saw intuitively and instantly what took me another year of living under mob rule to figure out.

@EvanBurchard commented on 17 Feb 2017

@throwaway1973 Who the hell needs to be told not to doxx people? "Inactive participation?" Seriously, I don't see the need for this complexity. And with all this complexity, it is still somehow not a "moral document."

You want to know how to figure out which tribe to back? Look at where the guns and firehoses and pepper spray are pointed. Kent State, "good" protesters in Berkeley, "bad" protesters in Berkeley.

"I am the lorax, I speak for the white supremacists, for they have only all 3 branches of the government and haven't squashed the media yet."

A year living under mob rule. I don't know what that means. I think you can be less cryptic since you have a throwaway.

@ankushnarula commented on 17 Feb 2017

@throwaway1973 Who the hell needs to be told not to doxx people?

This is a good question. The primary case is someone who is so morally self-assured that they are willing to defy common sense/decency and commit harmful acts to achieve an end. There are varying degrees of this mentality, with the most extreme being covered by common law. However, as ideological warfare grows, this class of persons has grown in number across the Internet and within the physical world. The FCOP attempts to create a community that neutralizes this.

"Inactive participation?" Seriously, I don't see the need for this complexity.

That's fine. You can choose a different CoC to suit your own needs.

And with all this complexity, it is still somehow not a "moral document."

False dichotomy due to binary logic.

You want to know how to figure out which tribe to back?

Not particularly.

Look at where the guns and firehoses and pepperspray are pointed. Kent state, "good" protesters in Berkeley, "bad" protesters in Berkeley.

Not interested in your personal preferences.

"I am the lorax, I speak for the white supremacists, for they have only all 3 branches of the government and haven't squashed the media yet."

Seriously. This is not the forum for your personal political grievances.

@jdegoes changed the title from "Shaming is punished more harshly than racism" to "Consider expanding scope of inactive participation violations" on 27 Feb 2017
@jdegoes commented on 12 Mar 2017

@ankushnarula @throwaway1973 I currently don't see any way to extend the scope of inactive participation violations that is not guaranteed to lead to OpalGate-style abuses. My view is that if someone can be trusted to follow the law (i.e. poses absolutely no safety hazard) and trusted to sabotage neither the community nor its members' professions, then they can be permitted to participate, possibly with additional precautions in place.

Note that most or all of the extreme behavior mentioned above is covered by existing state laws (stalking, cyberstalking, harassment, threats). Indeed, behavior close to the boundaries of such laws (but not technically in violation of them) could cause a community to refuse right of participation on grounds there is probable cause to believe those boundaries may be crossed.

If anyone thinks of exceptions to this and a way of curbing abuses against an expansion, then let's open that on a new ticket that's more focused than this one.

@jdegoes closed this on 12 Mar 2017
@mjg59 commented on 13 Mar 2017

Just to be sure I understand this, a hypothetical:

A and B are members of a community C. B is a member of racial minority X. A writes a book entitled "Members of X are intellectually inferior and should not be permitted to breed" and gives a number of widely publicised talks and interviews on the topic. A has never given any overt indication of their racist beliefs while involved in community C, and when asked by community leaders has made it clear that they keep these beliefs separate from their behaviour within the community.

My understanding is that B has no mechanism to protest A's involvement in the community, and that permitting A to be a part of community C does not demonstrate any corruption on the part of the community?

B writes a tweet in which they mention that A made an incorrect assertion about a corner case of a programming language during a private conversation at a conference run by community C. This causes some people to negatively reappraise their opinion of A's expertise in said programming language.

My understanding is that A can request an official resolution against B, and that this must be investigated?

@jdegoes commented on 13 Mar 2017

@mjg59 Consider all of the following scenarios:

  • A is a literal murderer in the eyes of a conservative B (e.g. a doctor who provides abortions); but A does not evangelize their moral system in community C.
  • A believes that all those convicted of violent criminal offenses should be sterilized (including member B who has served jail time for a criminal offense), but does not evangelize their politics in community C.
  • A believes that killing and eating animals is morally equivalent to murder and cannibalism, and should be punished in the same way (in the context of meat-eater community member B), but does not evangelize their moral system in community C.

In all these cases, and the case that you mention, the mere act of possessing these beliefs (or openly discussing or evangelizing them in other communities) does not prevent member A from participation in the community, assuming said beliefs do not imply a safety hazard for other community members nor in any way violate laws.

FCOP communities are not the place to engage in ideological warfare, and this is not a bug, but a feature. Other COCs actively encourage such warfare and may be adopted by those so inclined.

This causes some people to negatively reappraise their opinion of A's expertise in said programming language. My understanding is that A can request an official resolution against B, and that this must be investigated?

If #76 passes (as I expect it to), then A would have no such recourse.

@mjg59 commented on 13 Mar 2017

And if potential employer D declines to employ A on the grounds that B's report indicates that A is clearly overstating their expertise in a language relevant to the job?

@jdegoes commented on 13 Mar 2017

And if potential employer D declines to employ A on the grounds that B's report indicates that A is clearly overstating their expertise in a language relevant to the job?

See #77, which would prevent factual or constructive criticism from counting as "harm" in the definition of professional sabotage.

@mjg59 commented on 13 Mar 2017

Is "This person made an unprofessional joke" factual?

@jdegoes commented on 13 Mar 2017

Is "This person made an unprofessional joke" factual?

Since #77 is really only a sketch of an idea at this point, rather than a concrete proposal, it's difficult to say whether "This person made an unprofessional joke" would qualify as harm.

My original thinking was that any factual, observational statement would be explicitly exempted from the definition of "harm", which would include the statement, "This person made an unprofessional joke," assuming such statement were indeed true and not a fabrication or a misunderstanding.

As another example, the statement that someone made "inappropriate sexual jokes about forking" in response to their stating something like, "I'd fork that guy's repository," would not necessarily be excluded; indeed, even a literal quote of something taken out of context that had negative effects on a member's career may not be excluded, if it was misleading out of context.

I also think intent matters and that's why an arbiter's involvement is crucial, because someone who is being careless or reckless is in a different class from someone who intends to destroy someone's career, who is in a different class than someone providing constructive professional feedback.

@mjg59 commented on 13 Mar 2017

How would you determine whether such a statement were true and not a fabrication or misunderstanding without effectively going through an arbitration process?

@jdegoes commented on 13 Mar 2017

How would you determine whether such a statement were true and not a fabrication or misunderstanding without effectively going through an arbitration process?

I don't think anything would be investigated unless (a) it had, or seemed likely to have, negative ramifications for a member's career, and (b) the member officially reported an incident. In other words, it's only by going through arbitration that such statements would be characterized as one thing or another.

@mjg59 commented on 13 Mar 2017

So, going back to my initial example: even with #76, if A asserts that professional harm has occurred as a result of B's statement, and that B took said statement out of context, A could request an official resolution against B and arbitration would result?

@jdegoes commented on 13 Mar 2017

Yes, but in the same way I could arbitrarily claim that you doxxed me (when in fact you had not), report the incident, and request an official resolution. Both are frivolous claims (as is or should be clear from the COC), but in order to know that, an arbiter needs to take a look to become familiar with the facts of the situation.

@mjg59 commented on 13 Mar 2017

If the conversation took place privately without other witnesses, it's not obviously a frivolous claim. But ok, let's take another example starting from the same place as above. During an event run by community C, A refers to B with a racial slur. A immediately apologises, and this is the first incident of its type in several years of A's involvement in the community. Before any arbitration has occurred, B informs A's employers of the (factually accurate) details of this incident and A is fired as a result. Would this be considered professional sabotage?

@jdegoes commented on 13 Mar 2017

@mjg59 Again, it's hard to say with #76 and #77 not yet in place. As currently written, with the very open-ended use of the word "harm", it could be considered professional sabotage, but once the definition of "harm" is locked down, that might change. In any case, I'd suggest #77 is a better place to discuss the definition of "harm" and / or limitations on the scope of such definitions.

@jacqueline-homan commented on 14 Mar 2017

So, let me see if I understand the situation correctly regarding the order of events in what occurred that led to this mess:

  1. An unapologetic fascist, misogynist, racist, sexist individual (who blogged under the name "Bedbug" or something like that, incessantly promoting his views elsewhere) applied to give a technical talk at LambdaConf last year. @jdegoes and the rest of the LambdaConf organizers did not know anything about this guy other than the technical topic he was applying to present, accepted his technical talk, and included him as one of the 80 or so speakers at LambdaConf 2015.

  2. Several members of the FP community pointed out that Bedbug (or whatever his name is) should not be given a platform due to his fascist, racist, sexist ideologies that he promoted elsewhere outside of the FP community.

  3. Several other members of the FP community pointed out that they did not want Bedbug to be no-platformed based only on his personal beliefs.

  4. @jdegoes asked only the female and non-white members of the LambdaConf community how they felt he should handle the situation with Bedbug and they said they weren't comfortable with the guy but were even more uncomfortable with the idea of setting a precedent for no-platforming technical speakers based solely on their activities outside of LambdaConf.

  5. People from both sides then gave @jdegoes their positions on the matter, neither side happy with the fact that he asked women and racial minorities (and only those two groups) within the LambdaConf community how they felt.

  6. @jdegoes made a judgment call, deciding not to un-invite Bedbug, and the conference went on. As a result, there was a lot of acrimony.

  7. It is now one year later and a code of conduct is in the works and, again, a whole lot of people from both sides aren't happy.

Did I miss anything so far?
