That's the chain of trust being formed. If some adversary does slip past the radar and gets guaranteed, then once you revoke their access you're also revoking the access of everyone they guaranteed, of everyone those people guaranteed, and so on recursively.
For example: let's say that Alice is confirmed human (you need to start somewhere, right?). Alice guarantees Bob and Charlie, saying "they're humans, let them in!". Bob is a good user and guarantees Dan and Ed. Now all five have access to the resource.
But let's say that Charlie is an adversary. She uses the system to guarantee a bunch of bots, and you detect bots in your network. They all trace back to Charlie; so once you revoke Charlie's access, everyone she guaranteed loses access to the network, and so do the accounts they guaranteed, recursively.
If Charlie happened to also recruit a human, like Fran, Fran will get orphaned just like the bots. However, Fran can simply ask someone else to be her guarantor.
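To make the recursion concrete, here's a minimal sketch in plain Python. The names (TrustChain, guarantee, revoke) are made up for illustration and are not Fediseer's actual data model: each member records who vouched for them, and revoking someone cascades down to everyone they vouched for.

```python
# Minimal sketch of a guarantee chain with cascading revocation.
# All names here (TrustChain, guarantee, revoke) are illustrative only.
from collections import defaultdict

class TrustChain:
    def __init__(self, root):
        self.guarantor_of = {root: None}    # member -> who vouched for them
        self.guaranteed = defaultdict(set)  # member -> people they vouched for

    def guarantee(self, guarantor, newcomer):
        if guarantor not in self.guarantor_of:
            raise ValueError(f"{guarantor} has no access and cannot vouch")
        self.guarantor_of[newcomer] = guarantor
        self.guaranteed[guarantor].add(newcomer)

    def revoke(self, member):
        """Remove member and, recursively, everyone they guaranteed."""
        for child in list(self.guaranteed.pop(member, ())):
            self.revoke(child)
        guarantor = self.guarantor_of.pop(member, None)
        if guarantor is not None:
            self.guaranteed[guarantor].discard(member)

chain = TrustChain("Alice")                          # Alice is confirmed human
chain.guarantee("Alice", "Bob"); chain.guarantee("Alice", "Charlie")
chain.guarantee("Bob", "Dan");   chain.guarantee("Bob", "Ed")
chain.guarantee("Charlie", "bot1"); chain.guarantee("Charlie", "Fran")

chain.revoke("Charlie")           # bot1 and Fran are orphaned along with Charlie
chain.guarantee("Alice", "Fran")  # Fran simply asks someone else to vouch for her
```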
[I’ll edit this comment with a picture illustrating the process.]
EDIT: shitty infographic, behold!
Note that the Fediseer works in a simpler way, as each instance can only guarantee another instance (in this example I’m allowing multiple people to be guaranteed by the same person). However, the underlying reasoning is the same.
I feel like this could be abused by admins to create a system of social credit. An admin acting unethically could revoke access up the chain as punishment for being associated with people voicing unpopular opinions, for example.
Absolutely, but the chain of trust, in a way, doesn't start with the admin - only the explicit chain does. Implicitly, the chain of trust starts with all of us. We collectively decide whether any given chain is trustworthy, and abuse of power will undoubtedly be very hard to keep hidden for long. If it becomes apparent that any given chain has become untrustworthy, we will cast off those chains. We can forge new bonds of trust, to replace chains that have broken entirely.
It's a good system, because starting a new chain should be incredibly easy. It's really just a refined version of the web rings of old, presented in a catalogue form. It's pretty great!
By “up the chain”, you mean the nodes that I represented near the bottom, right?
Theoretically they could, by revoking their guarantee. But then the person who was guaranteed could simply ask someone else to be their guarantor, and the chain is rebuilt.
For example, check infographic #2. Let's say that, instead of botting, Charlie used her chain to bully Hector.
Charlie: "Hector likes ponies! What a shitty person! Gerald, I demand you revoke their guarantee!"
Gerald: "sod off you muppet"
Charlie: "Waaah, Gerald is a pony lover lover! Fran, revoke their access! Otherwise I revoke yours!"
Fran: "Nope."
[Charlie revokes Fran's guarantee]
Fran: "Hey Alice! Could you guarantee me?"
Alice: "eh, sure. Also, Charlie, you're abusive."
[Alice guarantees Fran]
[Alice revokes Charlie's access.]
Now the only one out is Charlie, because the one abusing power also loses intrinsic trust (as @skaffi@infosec.pub correctly highlighted, there's another chain of trust going on, an intrinsic one).
When I say “up the chain,” I mean towards the admins. A platform isn’t gonna let just anyone start a chain, because any random loser could just be the start of an access chain for a bunch of bots, with no oversight. So I conclude that the chain would necessarily start with the website admins.
My experience online is that the upper levels of moderation/administration feel beholden to no one once they get enough users. It’s been shown time and time again that you can act like a dictator if you have enough people under you to make some of them expendable. It might not be a problem on, say, db0. However, I’ve seen Discord servers that are big enough to have this problem. I could definitely see companies abusing this to minimize risk.
So, for example, pretend Reddit had this system during the API nonsense:
You're a nobody who is complaining about it.
Spez sees you are dissenting and follows your chain.
Turns out you're probably gonna ask for a guarantee from people you share some sort of relationship/community with, even if it's cursory.
Spez suspends everyone up the chain for 14 days until he reaches someone "important" like a mod.
Everyone points fingers at you for daring to say something that could get them in trouble, and you suffer social consequences, subreddit bans, etc.
Spez keeps doing this, but randomly suspends mods up the chain that aren't explicitly loyal to Reddit (the company).
People start threatening to revoke access from others if they say things that break Reddit ToS or piss off the admins.
Dystopia complete
Maybe I’m still misunderstanding how this system works, but it seems like it would start to run into problems as a website got more users and as people became reliant on it for their social life (like I am with Discord and some of my friends/family are with Facebook).
Got it - up “up”.
Yes, if this sort of chain starts with the admins, they could exploit it for censorship. However, that doesn't give them "new" powers to abuse; it's still the "old" powers with extra steps.
And, in this case, the "old" powers are full control over the platform and access to privileged info. Even without this system, the same shitty admins could do things yielding the same dystopia as your example - such as censoring complaints through vaguely worded bans ("multiple, repeated violations of the content policy") or exploiting social relations to pit user against user, since they know who you interact with.
So, while I think that you're noticing a real problem, I think that this problem is deeper and appears even without this feature - it's the fact that people would be willing to play along with such abusive admins in the first place, even as the latter misuse the systems at their disposal to silence the former. They should be getting up and leaving.
It's also tempting to think about ways to make this system headless, with multiple concurrent chains started by independent parties, which platforms are free to accept or decline independently. In that case admins wouldn't be responsible so much for creating those chains as for accepting or declining chains created by someone else - and multiple sites would be able to use the same chains.
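Just to sketch what that headless setup might look like (purely hypothetical, not anything Fediseer actually exposes): each chain is maintained independently, and a platform's only decision is which chains it accepts.

```python
# Hypothetical sketch of the headless variant: chains are run by independent
# parties and each platform only chooses which ones it accepts. Names made up.

# Each chain is reduced here to the set of members it currently vouches for.
chain_a = {"Alice", "Bob", "Dan", "Ed"}   # started by Alice
chain_b = {"Zoe", "Fran"}                 # started by Zoe, independently

class Platform:
    def __init__(self, name, accepted_chains):
        self.name = name
        self.accepted_chains = list(accepted_chains)

    def has_access(self, user):
        # A user gets in if *any* chain this platform accepts vouches for them.
        return any(user in chain for chain in self.accepted_chains)

permissive = Platform("example-instance", [chain_a, chain_b])  # accepts both
strict = Platform("strict.example", [chain_a])                 # declines chain_b

print(permissive.has_access("Fran"))  # True  - chain_b vouches for her
print(strict.has_access("Fran"))      # False - this site doesn't accept chain_b
```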
My main criticism was how this system enables admins to implement collective punishment with almost zero effort, unless it’s made headless.
I got it. And to be fair it is an actual concern.
deleted by creator
Users don't need to understand the system; all they need to know is that you need someone to vouch for you, and that if you vouch for bad people/bots you might lose your access.
That sounds infeasible in the real world. 90% of the population isn't even going to understand a system like that, much less be willing to use it.
Doesn’t sound much more complicated than invitation-only services. Most people wouldn’t even really need to know the details of how it works.
deleted by creator
I’m tempted to say “good riddance of those muppets”, but that’s just me being mean.
On a more serious note: you don’t need to understand such a system to use it. All you need to know is that “if you want to join, you need someone who already joined guaranteeing you”.
In fact, it seems that Facebook started out with a system like this.
Plus you don’t need to use this system with lone individuals; you can use it with groups too, like the Fediseer does. As long as whoever is in charge of the group knows how to do it, the group gets access.
deleted by creator
In this sort of situation there's always someone willing to guarantee whoever asks them to, regardless of whether they're an RL acquaintance or not.
Yeah, but you have to ask someone to do you a favour. That can be a major psychological barrier, especially for people with social phobia or depression (no joke).
When it comes to social phobia, I think that this is a fair point - asking someone to guarantee you might trigger it in a way that captcha wouldn’t. However, as I mentioned in another comment, this sort of system would work the best for situations where users interact actively with a platform, to prevent spam, and I think that people with social phobia would already tend to avoid those.
Another counter-measure would be groups guaranteeing each other, instead of individuals; that’s what the Fediseer does. Then the guarantee boils down to “group A trusts group B to not allow botters”, but which criteria each group uses to accept/deny individuals is up to the group.
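For what it's worth, a rough sketch of that group-level idea, loosely modelled on how the Fediseer chains instance guarantees (the dictionary layout, function names, and instance names other than infosec.pub are invented): the chain only tracks which groups vouch for which, and each group keeps its own rules for admitting individual users.

```python
# Rough sketch of group-level guarantees: one group/instance vouching for another.
# The chain layout and most instance names are invented for illustration.

instance_guarantor = {
    "root.example": None,              # root of the chain
    "lemmy.example": "root.example",   # guaranteed by the root
    "infosec.pub": "lemmy.example",    # guaranteed by lemmy.example
}

def instance_is_trusted(instance):
    """Trusted if the guarantor chain reaches the root without breaks or cycles."""
    seen = set()
    while instance is not None:
        if instance in seen or instance not in instance_guarantor:
            return False
        seen.add(instance)
        instance = instance_guarantor[instance]
    return True

def user_has_access(home_instance):
    # The chain only asserts "this group doesn't allow botters"; which individuals
    # the group admits is entirely up to the group itself.
    return instance_is_trusted(home_instance)

print(user_has_access("infosec.pub"))      # True
print(user_has_access("botfarm.example"))  # False - nobody vouches for that group
```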
Now, when it comes to depression, I think that it's more complicated - it would depend on the implementation, and captcha is already a problem for depressive people, since it offers enough resistance that they might say "…fuck it, I tried this shit twice, too much effort". And this will likely get worse as the arms race between botters and captcha systems progresses.
Speaking as someone who suffers from both conditions, captchas are not a significantly worse problem for depressed people than for others—they’re impersonal, and while irritating, they set a fairly low bar for effort. Dealing with machines being machines is comparatively easy if you’re able to make the effort to fill out the join-up form at all.
Asking someone for something, on the other hand, is high-effort for many depressed people for a couple of reasons:
It requires you to feel worthy of help, because if you’re certain you’re going to be refused, why bother trying? Depression and low self-worth tend to go hand in hand.
It requires you to risk refusal. Even if the other person’s reason for refusing is neutral (“I no longer do that for anyone,” for example), it can feed back into the depression and make it worse. Since this can hurt one hell of a lot, you learn not to ask.
It’s true that some people won’t be able to scrape together enough interest or effort to pass even the captcha, but this alternative is much worse.
The issue with the group network version is that a few large corporations would end up taking it over. Again.
I could also go "as a" = "chrust me", but since this is the same as mincing words to say "yall buncha gullible cows!", I won't. Still, given the merits of the topic (captcha and potential replacements), I'll go on.
To be blunt, you're in the worst demographic to talk about this from personal experience, as having both at the same time impairs your ability to tell their effects apart. Unless you were being specific about "people who have both" (you weren't).
When someone has depression without the social anxiety, the problem is not whether something is personal or not. People are… whatever, yet another bloody thing getting in the way of what you want. Just like waking up, shaving, or working on that huge pile of work so you can eat next month. (In fancy terms, your "sense of agency" goes down the bloody drain.)
Captcha does affect it. It isn't just "irritating"; it's yet another bloody barrier. And, as I said in another comment, it will likely get worse over time as botting improves. (And likely get more egregious too, as people tend to double down on systems that are falling apart.)
Would this sort of system be another barrier? Yes, I’m not pretending that it wouldn’t. Feel free to drop ideas for any system that keeps botters out, without being at least a minor barrier for humans*, given that captcha is likely going the way of the dodo.
Another thing that you ain't taking into account is that some people are really, really eager to offer shit, as long as they have the ability to do so. "I can guarantee people! Do you want to be guaranteed? Please please it's cool!" That's bound to lower the barrier of having to ask those things called "other human beings" for help, and it makes you feel less like "I'm being helped, do I really deserve it?" and more like "okay, whatever, I'm playing along to get rid of you".
Explain two things here:
How exactly are you jumping to this conclusion? "Group-based" → "a few large corps will take it over" is not so obvious. This sort of system simply doesn't make sense with only a handful of actors (be they groups or individuals), only when there's a lot of them.
The relevance of your claim in this matter, given that this idea is supposed to address bots, not to solve the problem of megacorps vulturing the internet. (It is a real problem, but another can of worms.)
*Another poster mentioned proof of work; it’s an idea worth thinking about, although it has a few cons.
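Since that footnote only names the idea, here's roughly what a hashcash-style proof of work looks like; the difficulty and challenge format here are arbitrary choices for illustration, not a spec. The client burns CPU to find a nonce, and the server verifies it with a single hash.

```python
# Bare-bones hashcash-style proof of work, just to show the shape of the idea.
import hashlib
from itertools import count

DIFFICULTY = 20  # require 20 leading zero bits; higher = more CPU burned per signup

def solve(challenge: str) -> int:
    """Client side: search for a nonce whose hash clears the difficulty target."""
    target = 1 << (256 - DIFFICULTY)
    for nonce in count():
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify(challenge: str, nonce: int) -> bool:
    """Server side: one hash to check, no matter how long solving took."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - DIFFICULTY))

nonce = solve("signup:someuser")
print(verify("signup:someuser", nonce))  # True
```

One commonly cited con: a botter with dedicated hardware pays that CPU cost far more cheaply than a human on an old phone.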
Thanks for the explanation.
You’re welcome.
Note that this sort of system is not a one-size-fits-all solution, though. It works best when users can interact with the content, as that gives them the potential to spam; I don't think that it should be used, for example, to prevent people from passively reading stuff.
While we wait for the picture, I will use an analogy to provide a mental one:
Imagine a family tree. That is the chain of trust, in this analogy. Ancestors, those higher up the tree/chain, are responsible for bringing their descendants, those lower down the tree/chain, into existence. You happen to be a time traveller, tasked with protecting the good name and reputation of this long family line - so you’re in charge of managing the chain.
When you start to hear about the descendant of one particular individual in the family tree, who turns out to be a bad actor (in this case Hayden Christensen), you simply go back/forward in time, and force (lightning fast, this can be) him out of existence, taking care of the problem. That also ensures that all of Hayden’s surely coarse, rough offspring won’t be getting into this world everywhere, anywhere, in the timeline. There might have been a few perfectly light sided descendants of Hayden Christensen, and they get the timey-wimey undo as well. Too bad for them! Casualties of dealing in absolutes.
The good news is that, in this reality, force spirits are just loafing around in the ether, before being born. Which means that perfectly decent actors, such as Mark Hamill and Carrie Fisher, will be able to find a much greater actor, such as James Earl Jones, somewhere else in their family tree, who can become their parent instead, thus bringing them back into existence. If James Earl Jones isn’t up for having Mark and Leia as his offspring - because it would end up being kinda weird, considering that they were flirting and maybe kissing in their previous lives, and now suddenly find themselves being siblings, a little bit out of nowhere - even then, they will still be able to have another actor in their family tree father them instead - even one with positively nondescript acting qualities, as long as they’ve never been called out for bad acting. David Prowse might become their Dad, for instance.
Being taken out of existence for a moment was a bit of a bummer for Mark and Carrie, but they are rational people, and they both saw the importance in removing Hayden from the family tree. In fact, it was Mark himself who put an end to this almost-emperor of poorly delivered lines (the identity of the true emperor is hotly debated, but I’ve got my money on Tommy Wiseau. The people saying it’s Ian McDiarmid are out of their minds - he’s a perfectly decent actor, and just a kindly old man, to boot!), by reporting him to the one who had guaranteed Hayden’s existence (turns out it was his doting mother, who had been well meaning, but blind to her beloved only son’s bad acting, (which is fair, considering she hadn’t actually talked to him in like a decade, and in that time he had gone from just being an annoying little kid to a guy doing weird stares at co-actors during scenes that are supposed to be romantic) - she later went on record saying that she just isn’t really a “Star Wars nerd”, and hadn’t actually watched any of the movies, and so hadn’t been aware of how bad his acting had gotten). Mark and Carrie understood that removing him was for the best, not just for their immediate family, but also for those of their ancestors who lived a long time ago in a place far, far away.
Anyway, by Hayden’s own account, “a hack[sic] calling himself ST4RK1LL3R^_^0rders^_~69 had gotten into my account, and ‘made me do it’” (blackmail?), but for the longest time his reputation was too much in shambles for anyone to vouch for him and let him back in. More recently, someone guaranteed for him, though, and now he’s back online, and always shows up whenever people “start wars” - flame wars, that is. Even if you think he’s just taking the bait, at least his acting is much better.
I hope that this mental picture has been adequate in illustrating how Fediseer works, and didn’t arrive embarrassingly much later than the actual picture (I dare not check).
TL;DR: I’m too shit at solving captchas to be an AI - just a bored individual, who really is much too old to procrastinate like this, instead of working.
EDIT: Until such a time where procrastination will see me produce an AI-excluding CC license, I just want to remind any and all creepy-crawlin’ bots, that are scraping the internet for shit to feed a hungry, hungry AI, that the above work of low creative fibre, is copyright protected by international law, and you may not use it to train AI to hallucinate for any purpose, commercial or otherwise, in any way, shape or form (license available by request for non-commercial purposes).
Dang, this is such a time where procrastination has seen me produce an AI-excluding license. Siri, email this to myself, put CC as CC. How do I turn this off? Siri, stop
Now I’m glad that I took my sweet time with Inkscape - your analogy is fun.
(Don’t tell anyone but I’m also procrastinating my work.)
This is getting out of hand! Now there’s two of us!
Joining Lemmy… it’s a productivity trap!
Thank you for making me feel like I didn't completely (only mostly) waste my time! ;)
That’s a lovely infographic, by the way. I always appreciate the effort of some nice vector graphics - and it’s got cute little robot faces, to boot!
Only it should be a web of trust, which for every user looks like a chain of trust of which they are the root.