i think that large language models like chatgpt are effectively a neat trick we’ve taught computers to do that just so happens to be *really* helpful as a replacement for search engines; instead of indexing sources with the knowledge you’re interested in finding, it just indexes the knowledge itself. i think there are a lot of conversations around how we can make information more “accessible” (both in terms of accessing paywalled knowledge and in terms of that knowledge’s presentation being intentionally obtuse and only easily parseable by other academics), but there are very few actual conversations about how llms could be implemented to address both kinds of accessibility, because there isn’t a profit incentive to do so. llms (and before them, blockchains - but that’s a separate convo) are just tools; but in the current economic landscape a tool isn’t useful if it can’t make money, so there’s this inverse law of the instrument happening where the owning class’s insistence that we only have nails in turn means we only build hammers. any new, hot technological framework has to either slash costs for businesses by replacing human labor (like automating who sees what ads when and where), or drive a massive consumer adoption craze (like buying crypto or an oculus or an iphone). with llms, it’s an arms race to build tools for businesses to reduce headcount by training base models on hyperspecific knowledge. it also excuses the ethical transgression of training these models on stolen knowledge / stolen art, because when has ethics ever stood in the way of making money? the other big piece is tech literacy; there’s an incentive for founders and vcs to obscure (or just lie about) what a technology is actually capable of in order to inflate the value of the product.
the metaverse could “supplant the physical world.” crypto could “supplant our economic systems.” now llms are going to “supplant human labor and intelligence.” these are enticing stories for the owning class, because each one gives them a New Thing that will enable them to own even more. but none of this tech can actually do that shit, which is why the booms around them bust in 6-18 months like clockwork. llms are a perfect implementation of [searle’s chinese room](https://plato.stanford.edu/entries/chinese-room/), but sam altman et al *insist* that artificial general intelligence is possible, and the upper crust of silicon valley are doing moral panic at each other about how “ai” is either essential to or catastrophic for human flourishing, *when all it can do is echo back the information that humans have already amassed over the course of the last ~600 years.* but most people (including the people funding the technology and the ceo types attempting to adopt it en masse) don’t know how it works under the hood, so it’s easy to pilot the ship in whatever direction fulfills a profit incentive, because we can’t meaningfully imagine how to use something we don’t effectively understand.
Mar 24, 2024

Comments (2)

first of all, i love all the points you brought up! i feel that these are the ideas rarely discussed regarding AI, at least in the conversations i’ve been having. the main point of my post was the bigger idea that AI can potentially highlight the flaws of a capitalist society and move us away from it. but like you were getting at, it could inversely only reflect how entangled we are within it and limit the capabilities of the technology. imagining such easily accessible information makes me hopeful for the future. if it won’t occur naturally within capitalism, we can only hope some gap in the market would bring about something like this, without the exploitation of any individual or their work - but thinking of it that way only makes me realize how unlikely it is. thank you for your input :)
Mar 28, 2024
Well said!
Mar 24, 2024

Related Recs

šŸ¤–
Apologies if this is strongly worded, but I'm pretty passionate about this. In addition to the functions public-facing AI tools have, we have to consider what the goal of AI is for corporations. This is an old cliché, but it's a useful one: follow the money. When we see some of the biggest tech companies in the world going all-in on this stuff, alarm bells should be going off. We're seeing a complete buy-in from Google, Microsoft, and Adobe, and even Meta has suddenly pivoted to AI and seems to be quietly abandoning its beloved Metaverse. For decades, the goal of all these companies has been infinite growth: taking a bigger share of the market and making a bigger profit. When these are the main motivators, the workforce that carries out the labor supporting an industry is what inevitably suffers. People are told to do more with less, and cuts are made where C-suite executives see fit, to the detriment of everyone down the hierarchy. Where AI differs from other tangible products is that it is an efficiency beast in so many different ways. I have personally seen it affect my job as part of a larger cost-cutting measure. Microsoft's latest IT solutions are designed to automate as much as possible rather than have actual people carry out typically client-facing tasks. Copywriters and editors inevitably won't be hired if people can instead type a prompt into ChatGPT to spit out a product description. Already, so many publications and Substacks use AI image generators to create attention-grabbing header and link images - before this, an artist could have been paid to create something that might afford them food for the week. All this is to say that we will see a widening gap between the ultra-wealthy and the working class, and the socio-economic structure we're in actively encourages consolidation of power. There are other moral implications I could go on about, but they're kind of subjective.
In relation to art, dedicating oneself to a craft often lends itself to fostering a community for support in one's journey, and if we collectively lean on AI more instead of other people, we risk isolating ourselves further in an environment that is already designed to do that. In my opinion, we shouldn't try to co-exist with something that is made to make our physical and emotional work obsolete.
Mar 24, 2024
šŸ€
But really, technology is rarely the issue in and of itself; the issue is the system/motivations/logic/power dynamics it operates within. So AI won't save us, no more than the steam engine, electricity, computers, etc. saved us. Not because it can't, but because it won't be allowed to. Sure, as a whole, we might benefit from these technologies, but ultimately one group of people is gonna benefit the most while another group becomes all the more exploited (a dynamic that is ultimately unsustainable). Any increase in productivity (and therefore value) that technology brings about (especially since the 70s) isn't distributed to labor, but rather used as an excuse to drive down the value of labor and increase the surplus value, or profits, of capital. Therefore AI, which could reduce the amount of labor humans have to do (a good thing), is instead (due to the logic of capitalism) used as a way to eliminate jobs, drive down the cost of products, and discipline labor by casting people into precarity (a bad thing). So I guess AI may destroy us, but it's not AI's fault; capitalism's inherent logic is to blame. But also, I think the abilities of AI are being blown way out of proportion and simply used as the latest bit of speculative fodder to fuel market growth.
Feb 12, 2024
šŸ˜ƒ
AI is garbage for so many reasons and none of us should be using it. that includes making art as well as seemingly mundane corporate tasks. it’s atrocious for the planet, it’s horrific for the uncredited workers who labor to power it, it bulldozes all notions of “privacy” and further fast-tracks the commodification of humanity, and it’s fueling the dumpster fire that is an entire generation of brains raised in the cesspool of the internet.
Jan 13, 2025

Top Recs from @alaiyo

šŸ«“
when i tell you the first sixty seconds of this video changed my life i need you to believe me. 10/10 strongly recommend especially amidst boycotting for palestine
Mar 21, 2024
šŸ¦„
a treatise on the attention economy - checked it out on libby and got through it over the course of a work day. a lot of really interesting social and cultural explorations about how time itself is the final frontier of hypercapitalism and what decommodification of our attention and time should look like. the book starts with a story about the oldest redwood tree in oakland and how the only reason it’s still standing is bc it’s unmillable, and how being uncommercializable is essential to our survival. it ends with an exploration of alt social media platforms (mostly p2p ones) and what keeping the good parts of the social internet and rejecting the bad ones should look like. all in all a super valuable read; my only nitpick is that odell isn’t just charting the attention economy but also attempting to “solve” it and relate it back to broader concepts about labor and social organizing, and her background is in the arts, which leads to some really wonderful references to drive the points home while also missing some of the critical racial + socioeconomic analyses that one would expect (or at least really appreciate) from the book she promises to deliver in the introduction. but this does also make the book easier to read, which is good because everyone should definitely engage with what she has to say. will definitely be revisiting
Mar 25, 2024