all 7 comments

[–]zyxzevn 5 insightful - 3 fun -  (3 children)

I think that you want to build some kind of trust-network.
Which is a difficult problem, because even those can be infiltrated.

[–]JasonCarswell[S] 1 insightful - 2 fun -  (2 children)

I don't think it's that difficult.

As long as there is skepticism, anticipation of infiltration, and checks and balances led by rational minds and open discourse, I have faith the trust-network should be possible. More important, critically IMO, is that branches of trust be allowed to grow. If people disagree, that's fine and perhaps reason for distrust - but it doesn't mean anyone is wrong. Further, you may end up with a minority of legit voices against armies of STABs. Sure, echo-chambers may form, but with some algorithm tuning, some overlapping should be encouraged to perpetuate ongoing reassessment processes.

[–]zyxzevn 3 insightful - 2 fun -  (1 child)

I think that building a trust network is a continuous process.
But what is trust exactly?

First you need a network for people that are contributing positively to the forum.
This can change when someone's mood changes due to an emotional/traumatic event.
And how do you treat people that are interested, but mainly like to troll?
(Or instead of trolling want to push their weird opinion)

Then there is another kind of trust: Trust that we can communicate.
People can be trusted in one area, but not in another.
Usually because they are badly informed by propaganda (like QAnon or CNN),
which makes communication worthless.
On the other hand, they can have real-world experiences that differ from what I know.

[–]JasonCarswell[S] 1 insightful - 2 fun -  (0 children)

I think that building a trust network is a continuous process.

Agreed!

But what is trust exactly?

Whatever you want it to be. It can even be meaningless to you. But if, via something like MetaVotes™, you create a data matrix with more qualitative information than just like/dislike, then you can curate your own content.

If a hundred STABs are posting shit, then you filter out most of them but keep rating the content you do see, in order to keep reassessing all of it. Also, with this filtration system the owner/admin of the decentralized instance can take out the trash as it piles up, ideally keeping the well-ranked content while banning the bad actors - all transparently, in case anyone ever needs to dig up the dirt. The ONLY content IMO that needs to be permanently deleted is child porn. All words are harmless, but in combination and with intent some exceptions may apply, such as advocating violence, which obviously must be hidden AND saved in case it ever needs to go to court - or to justify someone getting banned.
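A minimal sketch of that filter-but-keep-rating idea, assuming a hypothetical trust score per voter (all names, values, and the 0.0-1.0 trust scale are invented for illustration):

```python
# Hypothetical votes on one post: (vote_value, voter_trust) pairs.
# voter_trust is assumed to come from the trust network, 0.0-1.0.
votes = [(+1, 0.9), (+1, 0.8), (-1, 0.1), (-1, 0.05), (-1, 0.1)]

def weighted_score(votes: list[tuple[int, float]]) -> float:
    """Down-weight low-trust (STAB-like) votes instead of discarding
    them, so the raw data survives and can be re-scored later."""
    return sum(value * trust for value, trust in votes)

def rescore(votes: list[tuple[int, float]],
            trust_updates: dict[int, float]) -> float:
    """Re-run the tally after the trust network reassesses some voters
    (keyed here by vote index, purely for illustration)."""
    updated = [(v, trust_updates.get(i, t))
               for i, (v, t) in enumerate(votes)]
    return weighted_score(updated)
```

Because every vote is retained, a later reassessment (say, a voter turning out to be legit) just re-runs the tally rather than losing data.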

I know all of this requires serious coding, and I really doubt much if any of it could come as SaidIt upgrades, which is in part why starting fresh may be better - perhaps with an existing, possibly decentralized forum that has a wide support base of coders. The communities can and will come later, but as Elon Musk said, roughly paraphrased: we can't just make a competitive product and expect to go from nothing to thriving - we need to make a vastly superior product to get people to migrate to us.

First you need a network for people that are contributing positively to the forum.

A separate network? Why? Just ignore the asstrolls. They may even inspire better defenses. Take in ALL good ideas regardless of source.

Or do you mean, Level 3 | Distinguished Users | trusted selections by M7 only | Green?

This can change when someone's mood changes due to an emotional/traumatic event.

Sure, some people slip up or slide downward. I remove people from my /s/Friends list and remember them better for it. There's no reason M7 couldn't un-Green someone. They'd have to earn it back doubly, if they even cared. Most worthy people don't desire authority, IMO.

And how do you treat people that are interested, but mainly like to troll?

Let them troll until they prove themselves ban-worthy. TBD by the admin of the decentralized instance.

(Or instead of trolling want to push their weird opinion)

Unless the opinion is illegal, or an admin is running a tribal site (conservative, liberal, libertarian, religious, etc.), there's no need to limit, censor, or ban. If people don't like it, they can express that in their MetaVotes™. Also, IMO the MetaVotes™ algorithms should be 100% transparent so people don't have issues with how things are run. Further, I would LOVE for users to be able to program their own MetaVote™ "expressions" in preferences (a CGI term we used in Maya 3D software for equations that controlled things like rigs, dynamics, animation, and shaders - I did much of this animation with nulls, expressions, constraints, and f-curves). Obviously expressions would be expert level, above some advanced options, and much more than just basic topic subscriptions. IMO everything should have 3 or 4 modes, the 4th being a plug-and-play default.
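One way such user-programmed "expressions" might look, as a rough sketch: a predicate the user composes in preferences over per-post MetaVote™ tallies. Every field name and threshold here is invented for illustration:

```python
# Hypothetical per-post MetaVote tallies on several axes, not just
# up/down. All field names are made up for this sketch.
post = {"insightful": 12, "fun": 3, "spam_flags": 9, "author_trust": 0.2}

def make_filter(min_insightful: int, max_spam: int, min_trust: float):
    """Build a user-defined content filter from preference values,
    instead of hard-coding one site-wide ranking."""
    def keep(p: dict) -> bool:
        return (p["insightful"] >= min_insightful
                and p["spam_flags"] <= max_spam
                and p["author_trust"] >= min_trust)
    return keep

# One user's personal "expression": demanding, spam-averse.
my_filter = make_filter(min_insightful=5, max_spam=3, min_trust=0.1)
```

A transparent algorithm in this sense just means the `keep` logic is visible and editable, so nobody has to guess why a post was hidden.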

Then there is another kind of trust: Trust that we can communicate. People can be trusted in one area, but not in another. Usually because they are badly informed by propaganda. (Like Qanon or CNN) Which makes communication worthless. On the other hand they can have different real-world experiences that are different from what I know.

I think I understand what you mean here. MetaVote™ would provide much more qualitative data about the content than just like/dislike. And even if people are inauthentic, brainwashed, and/or otherwise limited, their opinions will still be measured and the results will be MUCH more revealing. For example: on Amazon you can have a book that gets a bunch of 5-star ratings and even more 1-star ratings with virtually nothing between. Either the content is divisive (tribal, controversial, etc.), or there are trolls trying to make it unpopular - or maybe it's just good for some and not for others. Knowing this gives us MUCH more information than a Reddit post at +2 (from 200 likes and 198 dislikes).

[–]icebong 2 insightful - 2 fun -  (1 child)

Yeah, I like this level system, like on odysee.com - they just implemented it, and it's easy to see how big or trustworthy an account is.

[–]JasonCarswell[S] 2 insightful - 2 fun -  (0 children)

I'd say great minds, but it's kinda obvious, not very complex, and importantly the hierarchy is very horizontal.

[–]fschmidt 2 insightful - 2 fun -  (0 children)

We need to liberate the masses from their brainwashing.

No we don't. Just move away from the masses and watch them destroy themselves from a distance.

I will NEVER deny that until M7 starts evolving in a serious way.

Won't happen.

I agree, not to focus on the masses, but they need to be considered.

Yes, a good forum platform should appeal to the masses on their own terms. Just don't expect them to become enlightened.

Pros/cons as to whether segregation would actually improve things.

Just give forum owners whatever tools they want to deal with spam, and leave it up to them which tools they use. In other words, this should not be decided at the platform level. Personally, I don't need any tools, since removing spam from my forums is easy enough.