2024-02-10 Boundaryless Conversations Podcast - #84 Gardening Platforms and the Future of Open Ecosystems With Alex Komoroske
@tags:: #litā/š§podcast/highlights
@links:: alex komoroske, open ecosystems, platforms,
@ref:: #84 Gardening Platforms and the Future of Open Ecosystems With Alex Komoroske
@author:: Boundaryless Conversations Podcast
=this.file.name
Reference
=this.ref
Notes
(highlight:: "North Stars" in Human Systems: Influence and Challenges of Communication Fidelity
Transcript:
Speaker 1
One way of looking at this is: when you have uncoordinated entities, lots of different agents, lots of different developers or people or coworkers or whatever, by default it's just Brownian motion. Everybody's just going in a direction, and at the level of the whole there is no coherence. It's just an overall entropic kind of expansion. However, if you can get even the teensiest amount of bias, like people are 10% more likely to go in that direction, the coherence of the overall behavior of this swarm goes up significantly, right? Just a little bit of an edge, a consistent edge, and it pulls everybody in that direction. And this is one of the reasons that North Stars can be very powerful, clarifying concepts within an organization, within an ecosystem, within a platform: having a very clear, plausible, and inspiring North Star that everybody knows about and everybody shares. That can lead to very coherent outcomes. And creating such a North Star is quite difficult. They're often very low resolution. They must be, because you can't control the details of it. You can only control the grand sweep of the thing, or give more of the path that it might take in the grand sweep of things. And I think that's very confusing to people, because everyone's used to control in the small, as opposed to this kind of sweeping arc in the bigger picture.)
- TimeĀ 0:07:44
-
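The bias-versus-coherence claim above can be made concrete with a toy simulation (entirely illustrative; the function name, agent count, and 10% figure as a deterministic probability are my assumptions, not from the episode). Each agent picks a heading; with probability `bias` it follows the shared "North Star" direction, otherwise it picks uniformly at random. Coherence is the length of the swarm's mean unit vector: 0 means no net direction, 1 means perfect alignment.

```python
import math
import random

def mean_heading_coherence(bias=0.0, n_agents=1000, seed=0):
    """Length of the mean unit vector of n_agents headings.

    With probability `bias`, an agent follows the shared direction
    (angle 0); otherwise it picks a heading uniformly at random.
    Returns a value in [0, 1]: 0 = pure Brownian motion, 1 = lockstep.
    """
    rng = random.Random(seed)
    x = y = 0.0
    for _ in range(n_agents):
        theta = 0.0 if rng.random() < bias else rng.uniform(0.0, 2.0 * math.pi)
        x += math.cos(theta)
        y += math.sin(theta)
    return math.hypot(x, y) / n_agents

# No bias: coherence hovers near 1/sqrt(n), essentially noise.
# A mere 10% bias: coherence jumps to roughly 0.1, a persistent,
# consistent edge that pulls the whole swarm in one direction.
print(mean_heading_coherence(bias=0.0))
print(mean_heading_coherence(bias=0.1))
```

The point of the sketch is the discontinuity in kind: unbiased coherence shrinks toward zero as the swarm grows, while even a small consistent bias produces a net drift that does not wash out.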
(highlight:: Coordination Challenges in Complex Systems
Transcript:
Speaker 1
When I did my original slime mold deck, Venkat did a really nice piece that was very flattering, about coordination headwinds and how they rule everything around us. They're everywhere. I know I use this lens a lot, but everyone wants and expects hard things to be hard for interesting reasons. And the reality is, the vast majority of hard things are hard for totally boring reasons. It's just the coordination challenge of getting lots of individual agents to point in roughly the same direction for some period of time, which is really, really, really hard. And the amount of energy in society that goes into this is huge. And I think to some degree, technologies like AI and the internet change this cost on the margin, on a fundamental level. This is not just a "man, humanity is kind of silly that we spend so much time coordinating." No, I think it's a fundamental thing, a thing that's expected to show up in any complex adaptive system. Over and over again, we're increasing the information transfer rate, which allows larger coordination structures to be plausible. But I think it's a fundamental characteristic. From first principles, it emerges in any plausible universe; any plausible universe must have this dynamic.)
- TimeĀ 0:19:00
-
(highlight:: Hard Things Are Hard Because of Boring Reasons e.g. Coordinating Lots of People
Transcript:
Speaker 1
I know I use this lens a lot, but everyone wants and expects hard things to be hard for interesting reasons. And the reality is, the vast majority of hard things are hard for totally boring reasons. It's just the coordination challenge of getting lots of individual agents to point in roughly the same direction for some period of time, which is really, really, really hard. And the amount of energy in society that goes into this is huge. And I think to some degree, technologies like AI and the internet change this cost on the margin, on a fundamental level. This is not just a "man, humanity is kind of silly that we spend so much time coordinating." No, I think it's a fundamental thing, a thing that's expected to show up in any complex adaptive system. Over and over again, we're increasing the information transfer rate, which allows larger coordination structures to be plausible. But I think it's a fundamental characteristic. From first principles, it emerges in any plausible universe; any plausible universe must have this dynamic.)
- TimeĀ 0:19:09
- favorite, coordination, 1todo evernote, achievement, information flow,
(highlight:: The Advantages of Taking a Portfolio, Demand-Reactive Approach to Platform Development
Transcript:
Speaker 1
And so once you get momentum, if you find a set of actions that will get you some momentum, even small amounts, then you might find that people go, oh, that's kind of cool, I want to be part of that. And that gets more momentum, and then it builds, and before you know it you actually have quite a large thing. And if you didn't, that's okay. This is where I have an essay about the doorbell in the jungle. There's this notion of people doing strategies where they say, well, we shouldn't do any of this unless this big grand thing is possible, and they go for the hardest part first, the big, bold bet. My God, no. Sketch out a thing that is coherent, that people go, I guess, yeah, that could work. Again, a North Star that is plausible and inspiring. And this should be like two pages; it should not be long. It should be: ah, here's why this thing could work. And people go, I guess, yeah, sure, okay. Then figure out the small set of actions you could take right now that would very likely pay for themselves, which is to say very low cost, or very clear value that's unlocked. Like, I've got eight customers who are banging on the door asking for exactly this thing and telling me to take their money. If I invest three hours of effort to build that feature, I will get money for it. So what's the likelihood I regret doing that work? Very low, because worst case scenario, it paid for itself. And then you accumulate. And then you just look for more signals of demand. And if at any point you stop seeing demand, just pause, stop developing it. That's fine. But when you start seeing demand again, start investing again.
And if you have multiple of these, eight or so, in different directions, the likelihood that any one of them is firing in demand at any given time is pretty high. And so this looks, again, to people like nihilism. They say, oh, you're saying you don't have a strategy. No, I'm saying I have a really good strategy. It is a meta-strategy that works quite well and is likely to work no matter what is happening in the rest of the swarm. But people so often just go, yeah, step one: convince all 18 people that this is a good idea. That's not going to work.)
- TimeĀ 0:22:03
-
(highlight:: Trust is the Lubricant of Organizations and Ambiguity is the Destroyer
Transcript:
Speaker 1
It's trust, trust, trust makes it all work. And especially when you've hired very good people, it's a "if you love it, let it go" kind of thing. People will know and do things that you don't fully understand, and they won't be able to represent to you why they are a good idea. Sometimes it's literally information they cannot share with you, because of the power dynamics or what have you, or it wouldn't make sense, or it would take forever to explain. So it's a lot of: hire good people, give them quite a bit of leeway, and help ensure that they can think long term. You can think of trust as the lubricant of an organization. It's the thing that allows the organization to navigate ambiguity and get stronger from it, not weaker. By default, ambiguity destroys an organization in a low-trust environment. Every little thing becomes, oh, I think that person is just trying to be lazy, or is doing this thing for their own personal gain. You're screwed at that point. You're now going to descend into this finger-pointing kind of thing.)
- TimeĀ 0:25:50
- favorite,
(highlight:: 'Kayfabe': When organizations and their employees incentivize holding false beliefs about reality
Transcript:
Speaker 1
My favorite little lens, and we use this a fair bit at Flux, is kayfabe. Kayfabe is this notion from professional wrestling, and what it means is a thing that everybody knows is fake, but that everybody acts like is real. And I think kayfabe is a useful lens for understanding what actually happens in organizations. In organizations, kayfabe is also a little bit of a lubricant, in the same way that politeness is: a lot of politeness is a lot of crushed-up little white lies, kind of, but they lubricate interactions in society. So a little bit of kayfabe is good. It helps make sure everybody is like, yeah, we're going to be bold, or whatever. What can happen in organizations, and it happens to some degree in all organizations over time, is that the kayfabe starts off grounded, just a little bit off the ground truth, but then you add more layers, and then you add more pressure for big, bold bets. That means it's harder for people to raise disconfirming evidence up the chain to their manager and say, I don't think that's possible. So now they're just playing the game. So now you have something that's decohering; at a certain point the kayfabe delaminates from underlying reality and becomes free-floating. And this is a very dangerous situation, because you now have a thing that looks like it's going in a coherent direction, and it is not. The other thing that's very dangerous about this is that you, as a participant in an organization, have to make a decision: do I try to point out the ground-truth reality, or do I play along with the kayfabe? And everybody has built all their plans on the kayfabe, not ground reality. So the more people have done that, the harder and more expensive it is for you to go for the ground truth.
And so when you point it out, hey, actually, look at this thing down here, all the people who built their plans, and their promo plans and everything, on top of plans that are based on non-reality will be very mad at you. So there's actually a very strong accelerating momentum to just play along with the kayfabe. And this is one of the reasons kayfabe is such a dangerous factor in large organizations. It happens, I think, in every organization at some point, to some degree, and it can be quite dangerous.)
- TimeĀ 0:26:53
- favorite, group_dynamics, organizational momentum, pluralistic ignorance,
- [note::Reminds me of pluralistic ignorance and the way Philly Truce founders weren't able to see the importance of user feedback]
(highlight:: De-risking Product Launches Based On Early User Feedback
Transcript:
Speaker 1
This is actually another place where, when organizations are sort of dominated by kayfabe, everybody wants to polish this big thing that's going to solve all the problems, this new product or whatever, and they sit there and they polish it and polish it and polish it. And actually having real users see it might ground-truth it; they might go, I don't want this. And that would be very dangerous politically within the organization. So people go, oh, we're just going to make sure it's really good. It's like, no, actually, the core concept might not be useful. It might not be a viable concept in the market. So that's why I always look for: how can you de-risk it? People have this intuition that a big, bold launch is the way you ship things: this perfect moment where everyone's going to go, whoa, and everyone's going to listen and hear about it. I think this tactic is extraordinarily dangerous, and unnecessary in a lot of cases. When you have something that people who are interested can self-select into, and that has momentum as they accumulate, you don't need a big bang. In fact, a big bang is bad, because maybe a key opinion-maker ate a bad burrito that day and they're grumpy, and they write a tweet that says, I hate this thing, for these reasons. Crap, now my thing is dead because that guy ate a bad burrito that day. So I think it's much better to de-risk the underlying reality. This is where the frame goes from "how can I maximize the number of people who hear about this and love it?" to "how can I minimize the number of people who use the experience and have such a bad time that they will never use it again?"
And then, secondarily, maximize the absolute number of people who have a good-enough or actively great experience. People focus on the latter part, but they don't focus on the former part. And if you do that, then you start realizing, oh, wait a second, I should do an alpha test, or some small thing in a quiet, out-of-the-way corner, for a self-selecting population of resilient, engaged, motivated users. It's a small group of people, so if you burn through them and they have a really bad experience, you've only burned through a small portion. But also, they are less likely to get burned, because it says: warning, this thing might blow up in your face. They use it anyway, it blows up in their face, and they go, well, they did tell me it was going to blow up in my face. So they're less likely to have a negative surprise about the underlying reality, which means that in the future, when you say, hey, we made a big update, it no longer blows up in people's faces as often, they might try it again. You haven't burned them out.)
- TimeĀ 0:30:36
- favorite,
(highlight:: Resources on Visions for Early Tech
Transcript:
Speaker 1
Go back and read things that were written back in the early 2000s: Yochai Benkler's The Wealth of Networks; The Starfish and the Spider, I'm blanking on the author's name; some of Clay Shirky's work. I think it's very prescient, and it's also a trip reading it, because it's not how it played out. It played out like that for a while, and then it looped right into a very top-down, aggregator-first kind of thing. If you read Tim Wu's The Master Switch, the thing that really surprised me about that book was that, before I read it, I had this notion of, oh, tech is special, this has never happened before, this is a totally different dynamic. And that book shows that basically every technological revolution, radio and TV and movies, they all went through this "whoa" and then into a centralized thing. So my hope is that we're actually at the era of a new sort of enabling technologies that might allow a new era of tinkering and bottoms-up things. Those might plausibly be some of the innovations in Web3, potentially; I'm not an expert in that. I do buy that lowercase-c crypto is useful. I don't necessarily know if I buy all the other components of the broader Web3 vision. And I also think that generative AI really does play a role in reducing the transaction costs. I think we will see some different equilibrium.)
- TimeĀ 0:35:01
- 1action,
(highlight:: Speed Begets Externalities: Taking Shortcuts Often Produces Debt That Must Be Repaid
Transcript:
Speaker 1
One thing, by the way, about going fast: when people are like, oh, we should go fast, going fast almost always requires taking shortcuts, and a shortcut takes the form of some kind of externality. An externality might be to whatever sucker is sitting in this seat in three years, so it might be an externality in time. It might be an externality to other, adjacent things. So often when things go fast, what they're doing is pumping externalities out into another part of the system. You can visualize this with a mental model of organizations. Someone's like, wow, it's so foggy in here. I built a little machine that's going to pump the fog out of our room so we can see clearly and execute. And what they don't realize is that the machine is powered by coal. So actually it's not fog, it's smog. Everybody is creating clarity in their local pocket and creating lack of clarity around them. So everyone is fighting an escalating race where everybody is pumping tons of coal, making the overall thing much, much more expensive and challenging. So just remember that when you opt for the shortcut, the externality might come back to bite you. And sometimes you can line it up so that it doesn't really matter.)
- TimeĀ 0:36:19
-
(highlight:: Using MediaWiki as a System for Facilitating Consensus Between Builders of a Platform
Transcript:
Speaker 1
So if you say, hey, it's basically this thing over here, plus this little extra thing over here, like, we're going to add a new value to CSS's display property, that's so much simpler than describing: we're going to build this whole new styling system that you access via JavaScript. It's like, whoa, whoa, what? So you want to rely on the existing pieces as much as possible where they fit. And that is a natural convergence that leads to a building-up of things that are coherent. But you also do want some space to be able to do things separately. There's a pattern that I like for this kind of thing, by the way. What you do is you have, in a spec, some kind of open-ended field that points to a URI of some other thing, some other semantic that you're referencing. And then you buy a domain as a foundation of a number of the major providers or whatever. Let's imagine this is about credentials. So you say, verifycredentials.org: you create this domain, and you just stand up a MediaWiki instance on it. And you make sure it's a small foundation of a number of different companies that just pay the hosting costs. And that's it. It's just a shared common little thing. Then you make it so anybody can create a page on this wiki, with their username prepended. So it can be alexs-this-semantic. Anyone can create one, and they can document: here are the fields, here are the methods, here are the events, here's what I mean by this. And then, when anybody creates a new one, first they must search for it. So they search for what they want their thing to do, and you say, oh, is it like one of these five? And you show how often each one is referenced, how many stars it has or whatever, and you sort based on that.
And so what this does is it allows anybody to create a semantic, whatever, fine. But it also creates a preferential-attachment effect: if you're kind of a tradesman, you look and go, oh, this one has 1,000 stars, and, oh yeah, that's pretty close to what I meant. And oh, they actually thought about this edge case; I hadn't thought about that. Yeah, I'll use that. So at a certain point, if one of these pops out and starts being used, it's, oh yeah, that one, that's the tab-strip semantic, sure, that's the one we all use. At that point, it's a no-brainer to promote it out of the sandbox into the un-prefixed version. And then you might even decide to standardize it formally after that, but you might not need to, because everyone goes, yeah, yeah, that's the one. So in this way, you've discovered the semantics by allowing people to do whatever they want, while giving a teensy bit of optional-but-default convergence energy. And this is a very powerful pattern.)
- TimeĀ 0:41:32
- platform development, 1action, wikis, product development, decentralized innovation, standards,
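The wiki pattern above (anyone may mint a username-prefixed semantic; search ranks candidates by how often they're referenced, so popular ones snowball) can be sketched in a few lines. This is purely a hypothetical illustration of the mechanism; the class and method names are mine, not from any real spec or the episode:

```python
from collections import Counter

class SemanticRegistry:
    """Toy model of the shared-wiki pattern: anyone may register a
    'username-name' semantic; search ranks hits by reference count,
    giving optional-but-default convergence (preferential attachment)."""

    def __init__(self):
        self.pages = {}          # "alex-tab-strip" -> spec text
        self.refs = Counter()    # how often each semantic is cited

    def register(self, user, name, spec):
        key = f"{user}-{name}"   # username prepended, sandbox namespace
        self.pages[key] = spec
        return key

    def reference(self, key):
        self.refs[key] += 1      # someone's spec points at this semantic

    def search(self, term):
        # Before minting a new semantic, you search existing ones,
        # most-referenced first: the gentle convergence nudge.
        hits = [k for k in self.pages if term in k]
        return sorted(hits, key=lambda k: -self.refs[k])

reg = SemanticRegistry()
a = reg.register("alex", "tab-strip", "fields, methods, events ...")
b = reg.register("dana", "tab-strip", "a near-duplicate spec")
for _ in range(1000):
    reg.reference(a)             # the popular semantic accumulates "stars"
print(reg.search("tab-strip"))   # alex's version sorts first
```

Nobody is forced to reuse an existing semantic, but because the most-referenced one surfaces first, reuse is the path of least resistance, which is exactly the "teensy bit of convergence energy" the pattern relies on.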
(highlight:: The Moral Pitfalls of Open Systems
Transcript:
Speaker 1
I used to believe, on a fundamental basis, that open systems were morally superior, period. And I no longer believe that to be the case. I think that many open systems tend to be morally superior for society, but that is not always the case. There are a number of places where they allow people to find... I have an essay I published many years ago, it's pretty dark, called the runaway engine of society. It frames society as an evolutionary search through an evolving fitness landscape, using different technologies and different substrates: biological evolution, cultural evolution, algorithms and search, AI. And the conclusion it comes to is: as you take away the gatekeepers, as you make it so that anybody can tweak any of these things, you very quickly fall down whatever the actual incentive gradient is, which in the case of humanity is whatever terrible heuristics were burned into the firmware of our brains in a high-friction evolutionary environment. Some of the heuristics that were burned in made a ton of sense in our evolutionary environment: if you see fat or sugar, eat as much as you can; gossip all the time; if you see somebody you don't recognize, just kill them, just in case. These terrible heuristics are absolutely horrendous in a modern society that can provide those things. So open systems tend to allow people to fall into these traps, where people do not the thing they want to want, but the thing they actually want. One of the things I want to want is to read interesting pieces by authors I disagree with that will challenge my worldview. That's what I want to want. But in reality, I want to read stuff that makes me feel like a smart person for what I already believe, especially when I'm stressed or busy, right?
This is true, I think, for the vast majority of people. And this is one of the reasons that gatekeepers in news, back in the past, were bad in a lot of ways.)
- TimeĀ 0:45:49
-
(highlight:: Decision-Making Heuristic: Decision + Friends & Family = Embarrassment?
Transcript:
Speaker 1
One of my rules of thumb is, for every decision I make, even micro-decisions, I try to imagine: if someone were to show me a video of this decision I'm making right now, in 10 years, at a party with all my friends and family, I want, at the very least, to not be embarrassed by it, and ideally to be actively proud.)
- TimeĀ 0:48:05
-
(highlight:: Slowing Down Goodhart's Law: Use Multiple Metrics + Recognize the Cleverness of People
Transcript:
Speaker 1
Goodhart's law is this notion that once you start optimizing for a metric, it ceases to measure what you care about. And the intuition for this is that the interest of the individual members of the swarm is different from the interest of the collective, the overall total goal. There are different ways Goodhart's law can run; it's inevitable, but there are different ways to make it run faster or slower. One of the ways that makes it run faster is if you say: all that matters is this metric, nothing else matters, this is the only legible signal in the entire system. Then you're going to get all kinds of fraud. Yes, if there are no real names in the thing, no direct connection to reality, you're going to get a whole bunch of fraud. And then the second thing that matters is: how clever are the participants? If they're really clever, then Goodhart's law will run faster, because they'll figure out all kinds of weird little loopholes and one-weird-tricks and whatever, and they will discover an innovation which only innovates the metric, not the underlying reality. Which is its own form of kayfabe.
Speaker 2
We should be careful with what metrics we set.
Speaker 1
On a fundamental basis, no individual metric is enough; you can do things like a portfolio of metrics and check metrics, and these help reduce the effect. But fundamentally, if what you care about is long-term outcomes: I don't want you to game the short term, I actually want you to care about the long term. Okay, but the long term takes a long time to happen. So you don't know, on the way there, whether this person is a bad actor or not. And so it's fundamentally impossible to steer based on long-term metrics. So you must use proxy metrics. But then the proxy metric will become your primary metric, because it's the one that's constantly there: how are you rewarded, how do you see what your performance is, whatever. So everyone will focus on the proxy metric, and then they'll forget that the proxy metric is a means to an end.)
- TimeĀ 0:51:08
- goodharts_law, metrics, group_performanceeffectiveness,
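The "cleverer participants make Goodhart's law run faster" point can be made concrete with a deterministic toy model (entirely my own illustration, not from the episode): each step, a proxy-maximizing agent picks whichever action scores higher on the proxy. Real work improves both the proxy and true quality; gaming a loophole improves only the proxy.

```python
def optimize_proxy(steps, loophole_payoff):
    """Toy Goodhart dynamic. Real work: +1 proxy, +1 true quality.
    Gaming a loophole: +loophole_payoff proxy, +0 true quality.
    A pure proxy-maximizer games whenever the loophole pays better,
    so cleverer participants (bigger loopholes) decouple the metric
    from the underlying reality faster."""
    proxy = true_quality = 0.0
    for _ in range(steps):
        if loophole_payoff > 1:       # loophole beats honest work on the proxy
            proxy += loophole_payoff  # "innovates the metric, not the reality"
        else:
            proxy += 1
            true_quality += 1
    return proxy, true_quality

# Dull participants (no good loopholes): the metric tracks reality.
print(optimize_proxy(100, 0.5))   # (100.0, 100.0)
# Clever participants: the metric soars while reality stands still.
print(optimize_proxy(100, 3.0))   # (300.0, 0.0)
```

A portfolio of check metrics slows this down precisely because a single loophole rarely inflates several independent proxies at once, but as the transcript notes, no proxy escapes the dynamic entirely.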
(highlight:: Trust is Incentivized When There's the Expectation of Future Direct/Indirect Interactions
Transcript:
Speaker 1
I believe that people want to be good, but often they're put in situations where they're incentivized to be bad. And it's very easy to get into those situations where there's an anonymous connection, like, oh, this is just some random person on the thing, I'm just saying what I feel. It's like, no, you're telling an individual that you think they are terrible and worthless, or doing this miserable thing. There's something about seeing someone in person that lets you understand their direct humanity and feel a level of trust in them, and the internet allows you to talk with anybody without that. Here's one way of looking at this: trust ultimately comes from the expectation of future interactions, directly or indirectly, with this counterparty. What I mean by indirect is, maybe this person tells their friends, oh my god, don't talk to Alex, that guy's a jerk. That's an indirect interaction in the future. So this is where trust comes from: expected future repeated interactions. If you're in a small town and someone cuts you off at the stop sign and speeds past you, you probably won't flip them off, because you're likely to see them in the grocery store again, or at church, or whatever. If someone cuts you off in New York City, you're going to flip them off. You will never see that person again. And there's something about an open system where you don't know when you're going to see people again. Like, oh, cool, if I have a bad interaction with this community, I'll just burn that account and go do something elsewhere. It puts people in this transactional mood, or enables people to be in this transactional mood. And that can create all kinds of weird oddities in the way the underlying system works.)
- TimeĀ 0:55:55
-
(highlight:: Hope is not a Strategy: The necessity of planning for undesirable scenarios
Transcript:
Speaker 1
I think recognizing that there are situations in which people will do things that are probably not what we want is not dwelling on the negative. It's not "we shouldn't dwell on the negative" or "I hope people will be better." Hope is not a strategy, man. You've got to be realistic about the kinds of things that might happen, and ask: what would we do if that did happen?)
- TimeĀ 0:57:33
- hope, red teaming, pre-mortems, murphyjitsu, realism, pragmatism,
(highlight:: Resource Recommendations From Alex Komoroske
Transcript:
Speaker 1
I have a lot of my favorite essays and articles linked from komoroske.com. And I think all of them kind of talk to each other a bit; they're all interrelated in various ways. I think the gardening-platforms one is useful, hopefully. I also really like "The Doorbell in the Jungle," by the way. It's one of my favorite little things: relatively concrete product guidance via a fanciful metaphor. The other thing I really recommend: first of all, I must recommend the Flux Collective, read.fluxcollective.org. That's a newsletter that I and a bunch of good friends and collaborators work on on a weekly basis. Another that I think really deserves more attention than it gets is my good friend Dimitri Glazkov's "What Dimitri Learned" Substack, which I think is phenomenal. Dimitri was the sort of uber-TL of Blink for many years. We were collaborators; we still talk almost every day. A brilliant engineer and architect, but he also understands the human component of code and the systems and how they're built. And his essays on that blog, I think, are just exceptionally insightful.)
- TimeĀ 1:01:04
- 1resource,
(highlight:: Coordination Challenges in Complex Systems
Transcript:
Speaker 1
When I did my original slime mold deck, Venkat did a really nice piece that was very flattering, about coordination headwinds and how they rule everything around us. They're everywhere. I know I use this lens a lot, but everyone wants and expects hard things to be hard for interesting reasons. The reality is that the vast majority of hard things are hard for totally boring reasons: the coordination challenge of getting lots of individual agents to point in roughly the same direction for some period of time is really, really, really hard. The amount of energy in society that goes into this is huge. And technologies like AI and the internet change this cost to some degree on the margin. I don't think this is just a "man, humanity is kind of silly for spending so much time coordinating" thing. No, it's fundamental. It's the kind of thing you should expect to show up in any complex adaptive system. Over and over again, we're increasing the information transfer rate, which allows larger coordination structures to be plausible. But it's a fundamental characteristic; from first principles, any plausible universe must have this dynamic.)
- Time 0:19:00
-
(highlight:: Hard Things Are Hard Because of Boring Reasons e.g. Coordinating Lots of People
Transcript:
Speaker 1
I know I use this lens a lot, but everyone wants and expects hard things to be hard for interesting reasons. The reality is that the vast majority of hard things are hard for totally boring reasons: the coordination challenge of getting lots of individual agents to point in roughly the same direction for some period of time is really, really, really hard. The amount of energy in society that goes into this is huge. And technologies like AI and the internet change this cost to some degree on the margin. I don't think this is just a "man, humanity is kind of silly for spending so much time coordinating" thing. No, it's fundamental. It's the kind of thing you should expect to show up in any complex adaptive system. Over and over again, we're increasing the information transfer rate, which allows larger coordination structures to be plausible. But it's a fundamental characteristic; from first principles, any plausible universe must have this dynamic.)
- Time 0:19:09
- favorite, coordination, 1todo evernote, achievement, information flow,
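The "teensiest amount of bias" intuition from the North Stars excerpt earlier in these notes, and the swarm framing here, can be sanity-checked with a toy simulation. This is purely an illustrative sketch, not anything from the episode; all the numbers are made up.

```python
import math
import random

def swarm_coherence(bias: float, n_agents: int = 10_000, seed: int = 0) -> float:
    """Each agent picks a heading uniformly at random, except that with
    probability `bias` it instead heads toward the shared North Star.
    Returns the length of the mean heading vector: ~0 means pure Brownian
    wandering, 1 means perfect alignment."""
    rng = random.Random(seed)
    sx = sy = 0.0
    for _ in range(n_agents):
        if rng.random() < bias:
            theta = math.pi / 2                   # the shared direction
        else:
            theta = rng.uniform(0, 2 * math.pi)   # uncoordinated wandering
        sx += math.cos(theta)
        sy += math.sin(theta)
    return math.hypot(sx, sy) / n_agents

for b in (0.0, 0.1, 0.3):
    print(f"bias={b:.1f} -> coherence ~ {swarm_coherence(b):.2f}")
```

With zero bias the mean heading washes out to roughly nothing; a 10% bias already gives the swarm a visible net direction, which is the "consistent edge pulls everybody" point.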
(highlight:: The Advantages of Taking a Portfolio, Demand-Reactive Approach to Platform Development
Transcript:
Speaker 1
And so once you get momentum, if you find a set of actions that will get you some momentum, even small amounts, then you might find that people go, "Oh, that's kind of cool. I want to be part of that." And that gets more momentum, and it builds, and before you know it you actually have quite a large thing. And if you didn't, it's okay. This is where I have an essay about the doorbell in the jungle. There's this notion of people doing strategy as one big grand thing, saying "we shouldn't even do any of this unless the big grand thing is going to happen," and going for the hardest part first, as if they have to land the big, bold bet up front. My God, no. Sketch out a thing that is coherent, that people look at and go, "I guess, yeah, that could work." Again, a North Star that is plausible and inspiring. And this should be like two pages; it should not be long. It should be, "ah, here's why this thing could work," and people go, "yeah, sure, okay." Then figure out the small set of actions you could take right now that would very likely pay for themselves, which is to say very low cost, or very clear value unlocked. Like, I've got eight customers banging on the door asking for exactly this thing and telling me to take their money. If I invest three hours of effort to build that feature, I will get money for it. So what's the likelihood I regret doing that work? Very low, because worst case scenario, it paid for itself. And then you accumulate. And then what you do is just look for more signals of demand. If at any point you stop seeing demand, pause and stop developing it. That's fine. When you start seeing demand again, start investing again.
And if you have multiple of these, eight or so, in different directions, the likelihood that any one of them is firing with demand at any given time is pretty high. And so to some people this looks like nihilism. They say, "oh, you're saying you don't have a strategy." No, I'm saying I have a really good strategy. It is a meta-strategy that works quite well and is likely to work no matter what is happening in the rest of the swarm. But people so often just go, "yeah, step one: convince all 18 people that this is a good idea." That's not going to work.)
- Time 0:22:03
-
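The "eight or so bets" claim has simple arithmetic behind it. A quick sketch (the 20% demand rate is an illustrative number, not from the episode):

```python
# If each of k independent bets shows real demand in a given week with
# probability p, the chance that at least one is worth investing in is
# 1 - (1 - p)^k: weak odds per bet still keep the portfolio busy.
def p_some_demand(k: int, p: float) -> float:
    return 1 - (1 - p) ** k

for k in (1, 4, 8):
    print(f"{k} bet(s) @ 20% each: demand somewhere "
          f"{p_some_demand(k, 0.2):.0%} of the time")
```

One bet with 20% odds is mostly idle; eight such bets give you something demanding investment over 80% of the time, which is why the meta-strategy keeps working regardless of which individual bet is quiet.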
(highlight:: Trust is the Lubricant of Organizations and Ambiguity is the Destroyer
Transcript:
Speaker 1
It's trust. Trust makes it all work. And especially when you've hired very good people, it's an "if you love it, let it go" kind of thing. People will know and do things that you don't fully understand, and they won't always be able to explain to you why they're a good idea. Sometimes it's literally information they cannot share with you, because of the power dynamics or what have you, or it wouldn't make sense, or it would take forever to explain. So a lot of it is: hire good people, give them quite a bit of leeway, and help ensure they can think long term. You can think of trust as the lubricant of an organization. It's the thing that allows the organization to navigate ambiguity and get stronger from it, not weaker. By default, ambiguity destroys an organization in a low-trust environment. Every little thing becomes "oh, I think that person is just trying to be lazy," or "they're doing this for their own personal gain." You're screwed at that point. You descend into this finger-pointing kind of thing.)
- Time 0:25:50
- favorite,
(highlight:: 'Kayfabe': When organizations and their employees are incentivized to hold false beliefs about reality
Transcript:
Speaker 1
My favorite little lens, and we use this a fair bit in Flux, is kayfabe. Kayfabe is this notion from professional wrestling, and what it means is a thing that everybody knows is fake, but that everybody acts like is real. I think kayfabe is a useful lens for understanding what actually happens in organizations. Kayfabe is also a little bit of a lubricant in organizations, in the same way that politeness is: a lot of politeness is a lot of crushed-up little white lies, but they lubricate interactions in society. So a little bit of kayfabe is good; it helps make sure everybody is like, "yeah, we're going to be bold" or whatever. What can happen in organizations, and I think it happens to some degree in all organizations over time, is that the kayfabe starts off grounded, just a little bit off the ground truth. But then you add more layers, and then you add more pressure for big, bold bets, and that makes it harder for people to raise disconfirming evidence up the chain to their manager and say, "I don't think that's possible." So now they're just playing the game. You have something that's decohering; at a certain point the kayfabe delaminates from underlying reality and becomes free-floating. This is a very dangerous situation, because you now have a thing that looks like it's going in a coherent direction, and it is not. The other very dangerous thing is that you, as a participant in the organization, have to make a decision: do I try to point out the ground-truth reality, or do I play along with the kayfabe? And everybody has built all their plans on the kayfabe, not ground reality. The more people have done that, the harder and more expensive it is for you to go for the ground truth.
So when you point out, "hey, actually, look at this thing down here," a lot of people who built all their plans, and all their promo plans, on top of non-reality will be very mad at you, and maybe quite passive-aggressive toward you. So there's actually a very strong accelerating momentum to just play along with the kayfabe. This is one of the reasons kayfabe is such a dangerous factor in large organizations. And it happens, I think, in every organization at some point, to some degree, and it can be quite dangerous.)
- Time 0:26:53
- favorite, group_dynamics, organizational momentum, pluralistic ignorance,
- [note::Reminds me of pluralistic ignorance and the way Philly Truce founders weren't able to see the importance of user feedback]
(highlight:: De-risking Product Launches Based On Early User Feedback
Transcript:
Speaker 1
This is actually another place where organizations dominated by kayfabe get stuck: everybody wants to polish this big thing that's going to solve all the problems, this new product or whatever, and they polish it and polish it and polish it. Having real users see it might ground-truth it: they might go, "I don't want this." And that would be politically dangerous to learn within the organization. So people go, "we're just going to make sure it's really good." No. The core concept might not be useful; it might not be a viable concept in the market. That's why I always look for how you can de-risk it. People have this intuition that a big, bold launch is the way you ship things: this perfect moment where everyone goes "whoa," and everyone listens and hears about it. I think this tactic is extraordinarily dangerous, and unnecessary in a lot of cases. When you have something that people who are interested can self-select into, and that gains momentum as they accumulate, you don't need a big bang. In fact, a big bang is bad, because maybe a key opinion-maker ate a bad burrito that day and they're grumpy, and they write a tweet that says "I hate this thing, for these reasons," and crap, now my thing is dead because that person ate a bad burrito that day. So de-risk the underlying reality. The frame goes from "how can I maximize the number of people who hear about this and love it?" to "how can I minimize the number of people who use the thing and have such a bad time that they will never use it again?"
And then, secondarily, maximize the absolute number of people who have a good-enough or actively great experience. People focus on the latter part, but they don't focus on the former part. If you do that, you start realizing: oh, wait a second, I should do an alpha test, some small thing in a quiet, out-of-the-way corner, for a self-selecting population of resilient, engaged, motivated users. It's a small group of people, so if they have a really bad experience, you've only burned through a small portion of your potential users. But also, they're less likely to get burned, because it says "warning: this thing might blow up in your face." They use it anyway, it blows up in their face, and they go, "well, they did tell me it was going to blow up in my face," so they're less likely to have a negative surprise about the underlying reality. Which means that in the future, when you say, "hey, we made a big update, it no longer blows up in people's faces as often," they might try it again. You haven't burned them out.)
- Time 0:30:36
- favorite,
(highlight:: Resources on Visions for Early Tech
Transcript:
Speaker 1
Go back and read things that were written back in the early 2000s: Yochai Benkler's The Wealth of Networks, The Starfish and the Spider (I'm blanking on the authors' names), some of Clay Shirky's work. I think it's very prescient, and it's also a trip reading it, because it's not how it played out. It played out like that for a while, and then it looped right into a very top-down, aggregator-first kind of thing. If you read Tim Wu's The Master Switch, the thing that really surprised me about that book was that before I read it I had this notion of "oh, tech is special, this has never happened before, this is a totally different dynamic." And that book shows basically every technological revolution, radio and TV and movies, went through this "whoa" phase and then into a centralized thing. So my hope is that we're actually at the start of a new era of enabling technologies that might allow a new era of tinkering and bottoms-up building. Those might plausibly be some of the innovations in Web3. Potentially; I'm not an expert in that. I do buy that lowercase-c crypto is useful; I don't necessarily buy all the other components of the broader Web3 vision. And I also think generative AI really does play a role in reducing the transaction costs. I think we will see some different equilibrium.)
- Time 0:35:01
- 1action,
(highlight:: Speed Begets Externalities: Taking Shortcuts Often Produces Debt That Must Be Repaid
Transcript:
Speaker 1
By the way, about going fast: when people say "oh, we should go fast," going fast almost always requires taking shortcuts, and a shortcut takes the form of some kind of externality. The externality might land on whatever sucker is sitting in this seat in three years, an externality in time. Or it might be an externality onto adjacent things. So often when things go fast, what they're doing is pumping externalities out into another part of the system. You can visualize this with a mental model of an organization: someone says, "wow, it's so foggy in here. I built a little machine that's going to pump the fog out of our room so we can see clearly and execute." What they don't realize is that the machine is powered by coal. So actually it's not fog, it's smog: everybody pumping away, creating clarity in their local pocket and creating a lack of clarity around them. And so everyone is in an escalating race where everybody is burning tons of coal, making the overall thing much, much more expensive and challenging. So just remember that when you take a shortcut, the externality might come back to bite you. And sometimes you can line things up so that it doesn't really matter.)
- Time 0:36:19
-
(highlight:: Using MediaWiki as a System for Facilitating Consensus Between Builders of a Platform
Transcript:
Speaker 1
So if you can say, "hey, it's basically this thing over here, plus this little extra thing," like we're going to add a new value to CSS's display property, that's so much simpler than describing a whole new styling system accessed via JavaScript. Whoa, whoa, what? So you want to rely on the existing pieces as much as possible where they fit, and that's a natural convergence that leads to building up things that are coherent. But you also want some space to be able to do things separately. There's a pattern I like for this kind of thing. In a spec, you have some kind of open-ended field that points to a URI of some other semantic that you're referencing. Then you buy a domain, run by a small foundation of a number of the major providers. Let's imagine this is about credentials: you create, say, verifycredentials.org, and you just stand up a MediaWiki instance on it. The foundation is just a number of different companies that pay the hosting costs; that's it, a shared common little thing. Then anybody can create a page on this wiki, with their username prepended, so it might be "alexs-" plus the semantic name. Anyone can create one, and they can document: here are the fields, here are the methods, here are the events, here's what I mean by this. But when anybody creates a new one, first they must search for it. They search for what they want their thing to do, and you say, "oh, is it like one of these five?" And you show how often each one is referenced, how many stars it has or whatever, and you sort based on that.
What this does is allow anybody to create a semantic, whatever, fine. But it also creates a preferential-attachment effect: if you're kind of a tradesman, you look and go, "oh, this one has 1,000 stars, and that's pretty close to what I meant, and they actually thought about this edge case I hadn't thought of. Yeah, I'll use that." At a certain point, if one of these pops out and starts being widely used ("oh yeah, that one, that's the tab-strip semantic, sure, that's the one we all use"), it's a no-brainer to promote it out of the sandbox into an un-prefixed version. You might even decide to standardize it formally after that, but you might not need to, because everyone goes, "yeah, yeah, that's the one." In this way, you've discovered the semantics by allowing people to do whatever they want, while giving a teensy bit of optional-but-default convergence energy. This is a very powerful pattern.)
- Time 0:41:32
- platform development, 1action, wikis, product development, decentralized innovation, standards,
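A minimal sketch of the search-before-create, sort-by-references pattern described above, in plain Python rather than an actual MediaWiki instance. The class name, page names, and star counts are all hypothetical illustrations, not anything from the episode.

```python
from dataclasses import dataclass

@dataclass
class Semantic:
    page: str                 # e.g. "alexs-tab-strip": username-prefixed
    description: str
    references: int = 0       # how many specs point at this page's URI

class SemanticsWiki:
    """Toy model: anyone may create a namespaced semantic, but
    search-before-create plus reference counts nudges newcomers toward
    the most-used existing definition (preferential attachment)."""
    def __init__(self):
        self.pages: dict[str, Semantic] = {}

    def search(self, query: str) -> list[Semantic]:
        hits = [s for s in self.pages.values() if query in s.description]
        # Most-referenced candidates float to the top of the results.
        return sorted(hits, key=lambda s: s.references, reverse=True)

    def create(self, user: str, name: str, description: str) -> Semantic:
        page = f"{user}-{name}"
        return self.pages.setdefault(page, Semantic(page, description))

    def reference(self, page: str) -> None:
        self.pages[page].references += 1

wiki = SemanticsWiki()
tab = wiki.create("alexs", "tab-strip", "a tab strip for switching panes")
for _ in range(1000):
    wiki.reference(tab.page)        # widely-adopted semantic
wiki.create("sams", "tabs", "tabs for switching panes")
print([s.page for s in wiki.search("switching panes")])  # most-referenced first
```

The convergence is optional (anyone can still create their own page) but default (search results rank the popular definition first), which is exactly the "teensy bit of convergence energy" the pattern relies on.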
(highlight:: The Moral Pitfalls of Open Systems
Transcript:
Speaker 1
I used to believe, on a fundamental basis, that open systems were morally superior, period. And I no longer believe that to be the case. I think many open systems tend to be morally superior for society, but that is not always the case. There are a number of places where they allow people to find traps. I have an essay I published many years ago, it's pretty dark, called "the runaway engine in society." It frames society as an evolutionary search through an evolving fitness landscape, using different technologies and different substrates: biological evolution, cultural evolution, algorithms, and search AI. The conclusion it comes to is that as you take away the gatekeepers, as you make it so that anybody can tweak any of these things, you very quickly fall down whatever the actual incentive gradient is. Which, in the case of humanity, is whatever terrible heuristics were burned into the firmware of our brains in a high-friction evolutionary environment. Some of those heuristics made a ton of sense in our evolutionary environment: if you see fat or sugar, eat as much as you can; gossip all the time; if you see somebody you don't recognize, kill them, just in case. These terrible heuristics are absolutely horrendous in a modern society that can provide those things. So open systems tend to allow people to fall into these traps, where people do not the thing they want to want, but the thing they actually want. One of my "want to wants" is to read interesting pieces by authors I disagree with that will challenge my worldview. That's what I want to want. In reality, I want to read stuff that makes me feel like a smart person for what I already believe, especially when I'm stressed or busy, right?
This is true, I think, for the vast majority of people. And this is one of the reasons that gatekeepers in news, back in the past, were bad in a lot of ways.)
- Time 0:45:49
-
(highlight:: Decision-Making Heuristic: Decision + Friends & Family = Embarrassment?
Transcript:
Speaker 1
One of my rules of thumb is, for every decision I make, even micro-decisions, I try to imagine: if someone were to show me a video of this decision that I'm making right now in 10 years, at a party with all my friends and family, I want to optimize for, at the very least, not being embarrassed by it. And ideally, being actively proud of it.)
- Time 0:48:05
-
(highlight:: Slowing Down Goodhart's Law: Use Multiple Metrics + Recognize the Cleverness of People
Transcript:
Speaker 1
Goodhart's law is this notion that once you start optimizing for a metric, it ceases to measure what you care about. The intuition for this is that the interests of the individual members of the swarm are different from the interest of the collective, the overall goal. Goodhart's law is inevitable, but there are different ways to make it run faster or slower. One way to make it run faster is to say, "all that matters is this metric, nothing else matters." If this is the only legible signal in the entire system, you're going to get all kinds of fraud. And if there are no real names in the thing, no direct connection to reality, you're going to get a whole bunch of fraud. The second thing is: how clever are the participants? If they're really clever, Goodhart's law will run faster, because they'll figure out all kinds of weird little loopholes and one weird tricks, and they will discover innovations that innovate only the metric, not the underlying reality. Which is its own form of kayfabe.
Speaker 2
We should be careful with what metrics we set.
Speaker 1
But on a fundamental basis, no individual metric escapes this. You can do things like a portfolio of metrics, and check metrics; these help reduce the effect. But fundamentally, if what you care about is long-term outcomes ("I don't want you to chase the short term, I actually want you to care about the long term"), well, the long term takes a long time to happen. On the way there, you don't know whether this person is a bad actor or not. So it's actually fundamentally impossible to steer based on long-term metrics; you must use proxy metrics. But then the proxy metric will become your primary metric, because it's the one you're constantly rewarded on, the one that tells you what your performance is. So everyone will focus on the proxy metric, and then they'll forget that the proxy metric is a means to an end.)
- Time 0:51:08
- goodharts_law, metrics, group_performanceeffectiveness,
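One way to see the "cleverer participants make Goodhart's law run faster" point is a toy simulation. This is entirely illustrative; the payoff numbers and probabilities are made up, not from the episode.

```python
import random

def goodhart_run(agents: int, steps: int, clever: float, seed: int = 0):
    """Each unit of effort either does real work (raises the true outcome
    AND the proxy metric) or games the metric (raises ONLY the proxy).
    `clever` is the probability of finding a loophole; cleverer swarms
    make the proxy delaminate from reality faster."""
    rng = random.Random(seed)
    proxy = true = 0
    for _ in range(agents * steps):
        if rng.random() < clever:
            proxy += 2          # a "one weird trick": metric-only innovation
        else:
            proxy += 1
            true += 1
    return proxy, true

for clever in (0.1, 0.9):
    proxy, true = goodhart_run(agents=100, steps=10, clever=clever)
    print(f"clever={clever}: proxy metric={proxy}, true outcome={true}")
```

In both runs the proxy metric looks healthy, but with clever participants the gap between the reported number and the underlying reality is far wider, which is the kayfabe-of-metrics point above.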
(highlight:: Goodhart's Law and the Use of Proxy Metrics
Transcript:
Speaker 1
The intuition for this is that the interests of the individual members of the swarm are different from the interest of the collective, the overall goal. Goodhart's law is inevitable, but there are different ways to make it run faster or slower. One way to make it run faster is to say, "all that matters is this metric, nothing else matters." If this is the only legible signal in the entire system, you're going to get all kinds of fraud. And if there are no real names in the thing, no direct connection to reality, you're going to get a whole bunch of fraud. The second thing is: how clever are the participants? If they're really clever, Goodhart's law will run faster, because they'll figure out all kinds of weird little loopholes and one weird tricks, and they will discover innovations that innovate only the metric, not the underlying reality. Which is its own form of kayfabe.
Speaker 2
We should be careful with what metrics we set.
Speaker 1
But on a fundamental basis, no individual metric escapes this. You can do things like a portfolio of metrics, and check metrics; these help reduce the effect. But fundamentally, if what you care about is long-term outcomes ("I don't want you to chase the short term, I actually want you to care about the long term"), well, the long term takes a long time to happen. On the way there, you don't know whether this person is a bad actor or not. So it's actually fundamentally impossible to steer based on long-term metrics; you must use proxy metrics.)
- Time 0:51:14
-
(highlight:: Trust is Incentivized When There's The Expectation Future Direct/Indirect Interactions
Transcript:
Speaker 1
I believe that people want to be good, but often they're put in situations where they're incentivized to be bad. And it's very easy to get into those situations where there's an anonymous connection: "oh, this is just some random person on the internet, I'm just saying what I feel." No, you're telling an individual that you think they are terrible and worthless. There's something about seeing someone in person that lets you understand their direct humanity and feel a level of trust in them, and the internet allows you to talk with anybody without that. Here's one way of looking at it: trust ultimately comes from the expectation of future interactions, direct or indirect, with this counterparty. What I mean by indirect is, maybe this person tells their friends, "oh my god, don't talk to Alex, that guy's a jerk." That's an indirect future interaction. So this is where trust comes from: expected, repeated future interactions. If you're in a small town and someone cuts you off at the stop sign and speeds past you, you probably won't flip them off, because you're likely to see them in the grocery store again, or at church, or whatever. If someone cuts you off in New York City, you're going to flip them off; you will never see that person again. And there's something about open systems where you don't know when you're going to see someone again: "if I have a bad interaction with this community, I'll just burn that account and go do something elsewhere." It puts people in this transactional mood, or enables people to be in this transactional mood, and that can create all kinds of weird oddities in the way the underlying system works.)
- Time 0:55:55
-
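The small-town versus New York intuition matches the standard repeated prisoner's dilemma result. This is a textbook formalization, not something from the episode, and the payoff values are illustrative.

```python
def cooperation_sustainable(T: float, R: float, P: float, delta: float) -> bool:
    """Grim-trigger condition in a repeated prisoner's dilemma:
    cooperating forever (reward R every round) beats a one-shot betrayal
    (temptation T now, mutual punishment P forever after) exactly when
    delta >= (T - R) / (T - P), where delta < 1 is the probability of
    interacting with this counterparty again."""
    payoff_cooperate = R / (1 - delta)
    payoff_defect = T + delta * P / (1 - delta)
    return payoff_cooperate >= payoff_defect

T, R, P = 5, 3, 1   # temptation, mutual reward, mutual punishment
print(cooperation_sustainable(T, R, P, delta=0.9))   # small town: True
print(cooperation_sustainable(T, R, P, delta=0.1))   # NYC stranger: False
```

When the odds of meeting again (delta) are high, trust and cooperation pay; when the counterparty can burn the account and disappear, delta drops toward zero and the transactional mood becomes the rational one.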
(highlight:: Hope is not a Strategy: The necessity of planning for undesirable scenarios
Transcript:
Speaker 1
I think recognizing that there are situations in which people will do things that are probably not what we want is not dwelling on the negative. "We shouldn't dwell on the negative," or "I hope people will be better"? Hope is not a strategy, man. You've got to be realistic about the kinds of things that might happen, and ask what we would do if that did happen.)
- Time 0:57:33
- hope, red teaming, pre-mortems, murphyjitsu, realism, pragmatism,
(highlight:: Resource Recommendations From Alex Komoroske
Transcript:
Speaker 1
I have a lot of my favorite essays and articles linked from komoroske.com. And I think all of them kind of talk to each other a bit; they're all interrelated in various ways. The gardening-platforms one, I think, is useful, hopefully. I also really like The Doorbell in the Jungle, by the way. I think it's one of my favorite little things: relatively concrete product guidance via a fanciful metaphor. The other thing I really recommend is some of my good friends' blogs. First of all, I must recommend the Flux Collective, read.fluxcollective.org. That's a newsletter that a bunch of good friends and collaborators and I work on on a weekly basis. Another that I think deserves even more attention than it gets is Dimitri Glazkov's "What Dimitri Learned" on Substack, which I think is phenomenal. Dimitri was the sort of uber-TL of Blink for many years. We were collaborators, and we still talk almost every day. He's a brilliant engineer and architect, but he also understands the human component of code and the systems and how they're built. And his essays on that blog, I think, are just exceptionally insightful.)
- Time 1:01:04
- 1resource,