In systems of social change, we grapple with an enduring tension: connection versus abstraction. Connection is slow, human, and relational. It thrives on trust, listening, and collaboration. Abstraction, on the other hand, simplifies complexity into patterns, insights, and models. It's fast, scalable, and efficient.
Each serves a purpose, but they pull in opposite directions. And now, with the rise of AI tools such as large language models (LLMs), this tension has reached new heights. LLMs thrive on abstraction; they reduce human interaction into data points, surface patterns, and generate outputs.
While LLMs are not intelligent in the sense of reasoning or self-awareness, they can serve as tools that reframe, rephrase, and reorganise a person's ideas in ways that feel expressive. This can enable creativity and reflection, but let's be clear: it's not agency. The tool reshapes inputs but doesn't make meaning.
In market-based systems, where efficiency is paramount, this may work. But in social systems, where relationships, context, and trust are everything, abstraction risks losing what makes systems real and resilient.
This essay is a case for vigilant embrace. It asks how we can keep tools in service to relationship, not the other way around. It draws from our country's experience of the self-help group (SHG) movement and its microfinance offshoots, tests it against the new frontier of LLMs in the social sector, and distils a few design rules for keeping the work human in an age of speed.
Connection as infrastructure
Decades ago, India's SHG movement reframed finance as a relationship first and a product second. Groups formed by affinity; members saved together; rules emerged from context; repayment schedules matched the rhythms of life and livelihood; and trust was the collateral. Over time, SHG–bank linkage became a way to bring formal finance into places where formal institutions had no legitimacy of their own. It worked only because process mattered.
As Aloysius Prakash Fernandez (long-time leader in the SHG movement with MYRADA and a key architect of its practice) has argued, SHGs built economies of connection. The time it took to form an SHG was not friction to be eliminated, but rather its formation and cadence: months of meetings, savings discipline, conflict resolution, and learning to keep books and hold one another accountable. That slow work created legitimacy and resilience, so that when crisis struck, the relational fabric held.
Then came the turn. As microfinance commercialised, much of the field shifted from SHG thinking to microfinance institution (MFI) thinking: from affinity to acquisition, from place to product, from presence to process compliance. Loans became standardised, repayment cycles rigid, and growth a KPI. Speed, greed, and standardisation (to borrow Aloysius's pithy phrasing) took what was relational and made it transactional.
The results were predictable. Repayment rates looked impressive, until they didn't. In many places, risks were accumulating: multiple lending without visibility into household cash flows, incentives that pushed volume over suitability, and the gradual erosion of trust as lenders treated people as portfolios rather than individuals. Products scaled, but belonging didn't. The social infrastructure that had once underwritten financial inclusion was being displaced by numbers that looked like progress.
It's tempting to narrate this simply as a story of 'bad actors', but that misses the deeper point. Even well-meaning institutions slide here because their structures privilege the measurable: gross loan portfolio, on-time repayment, and cost to serve. The things that make SHGs work (mutuality, ownership, repair) resist instrumentation, and become, quite literally, less valued.
If this sounds familiar to those working at the intersection of LLMs and social systems, it's because we're watching the same film again.
The question, then, is this: Where, if at all, do LLMs belong in the work of social change? And what can we learn from the SHG/MFI shift?

LLMs and the mechanistic view of wisdom
There are now many LLM-based tools designed to abstract and synthesise insights from human interactions, promising to amplify collective wisdom. In social change systems, where resources are stretched and problems are vast, this promise is tempting, and these tools do have some strengths.
- It organises and systematises human insights into building blocks.
- It surfaces diverse perspectives, tracing inputs back to their sources to ensure inclusion and accountability.
- It accelerates decisions, offering actionable outputs at scale.
But these strengths are also its greatest weaknesses, because they abstract away the human process of turning messy, situated conversations into neat patterns. This comes at a cost.
- Loud voices and flattened complexity: They risk over-representing common or louder perspectives while erasing nuance, dissent, and marginal views.
- Loss of relational insight: Wisdom doesn't arise from patterns alone. It comes from the trust, tension, and emotional connection born of human interaction.
- Hollow consensus: Outputs that bypass relational work may seem actionable, but they lack the trust and shared ownership that give decisions their power.
The result? Systems that look efficient but feel hollow, because tools, frameworks, and processes sever the relational ties that make systems real.
Recent empirical evidence seems to confirm what we sense intuitively about these limits. When researchers systematically tested LLM reasoning capabilities through controlled puzzles, they discovered something profound: As problems grow more complex, these models don't just struggle but collapse entirely. Even more telling, as complexity increases, they begin to reduce their effort, as if giving up. They find simple solutions but then overthink them, exploring incorrect paths.
Perhaps this is a window into the fundamental nature of these systems. They excel at pattern matching within familiar territories but cannot genuinely reason through novel complexity. And social change? It lives entirely in that space of the new and the complex, where contexts shift, where culture matters, where every community brings unprecedented challenges. If these models collapse when moving discs between pegs, how can we trust them with the infinitely more complex work of moving hearts, minds, and systems?
Apply the narrow versus wide lens
To navigate this challenge, the tension between connection and abstraction must be examined through another dimension: narrow versus wide. While connection and abstraction often feel like irreconcilable opposites, the narrow–wide lens helps bridge this gap by revealing how different kinds of tools can play meaningful roles in social change.
- Narrow tools are specific and targeted, solving well-bounded problems.
- Wide tools are generalised and scalable, seeking to address large systems.
Combining these in a 2×2 framework gives us four distinct areas where LLMs can, or cannot, play a meaningful role.
1. Narrow connection (Relational amplifiers)
- What it is: Tools that deepen human relationships by enhancing context-specific, targeted work.
- Example: A frontline caseworker uses an LLM to synthesise notes across multiple user visits in order to personalise their follow-ups. The LLM helps amplify memory and insight, but the relationship remains human.
- Why it works: These tools augment human connection by surfacing insights without replacing relational work. They stay rooted in the specific, bounded context of their application.
- Key use case: Tools for case management in social services. For instance, LLMs help social workers tailor interventions to individual users based on their unique needs and histories.
- Key question: Does this tool augment connection, or does it replace it?
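To make the boundary concrete, a relational amplifier of this kind can be kept narrow at the level of the prompt itself: the tool assembles the caseworker's own notes and asks only for open follow-up items, never for decisions about the person. This is a minimal sketch under stated assumptions; the visit records are invented, and the prompt would be passed to whatever model an organisation actually uses.

```python
from dataclasses import dataclass

@dataclass
class Visit:
    date: str
    notes: str

def build_followup_prompt(client_name: str, visits: list[Visit]) -> str:
    """Assemble a caseworker's own visit notes into a prompt that asks the
    model to surface open follow-up items only, not to recommend actions."""
    history = "\n".join(f"- {v.date}: {v.notes}" for v in visits)
    return (
        f"Summarise the visit history for {client_name} below.\n"
        "List open follow-up items only; do not recommend interventions.\n"
        f"{history}"
    )

# Hypothetical usage: the resulting prompt goes to an LLM, and the
# caseworker reads the summary before the next visit.
prompt = build_followup_prompt(
    "A.",
    [Visit("2024-01-10", "Discussed housing application; documents pending"),
     Visit("2024-02-02", "Documents submitted; asked about school admission")],
)
```

The design choice here is that the tool never sees anything the caseworker didn't write, and its instruction explicitly excludes recommendations, so the relational judgement stays with the human.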
2. Wide connection (Relational ecosystems)
- What it is: Tools that map and visualise relationships across broader ecosystems, enabling collaboration without eroding the human work of trust-building.
- Example: Stakeholder mapping tools that reveal community networks and power dynamics.
- Why it works: Wide connection tools respect the complexity of human systems, helping actors navigate and strengthen relationships without reducing them to transactions.
- Key use case: Network mapping for advocacy coalitions. LLMs can surface insights about overlapping efforts, potential collaborators, or areas of conflict, but the work of building those connections remains human.
- Key question: Does this tool illuminate relationships, or does it flatten them into transactions?
3. Narrow abstraction (Efficiency tools)
- What it is: Tools that automate repetitive, bounded tasks, freeing up time for relational or contextual work.
- Example: A grant officer uses an LLM to scan 100 applications for missing documentation or budget inconsistencies and flags files for review, but leaves decisions to humans.
- Why it works: Narrow abstraction tools stay within well-defined parameters, ensuring that the abstraction doesn't undermine human judgement or erode trust.
- Key use case: Administrative automation in nonprofits. AI can handle routine data entry or flag missing information in grant proposals, allowing staff to focus on strategic decisions and relationships.
- Key question: Has the process of abstraction removed necessary details that deserve human attention?
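It is worth noting that the simplest, safest form of this quadrant often needs no LLM at all: checking for missing documents and inconsistent budgets is deterministic, and a model is only worth adding for fuzzier signals. A rule-based sketch of the grant-officer example above, with hypothetical field names, might look like this:

```python
# Hypothetical required fields for a grant application
REQUIRED_FIELDS = {"budget", "registration_certificate", "audited_statement"}

def flag_for_review(application: dict) -> list[str]:
    """Return human-readable flags; the grant officer, not the tool, decides."""
    # Flag any required field that is absent
    flags = [f"missing: {f}" for f in sorted(REQUIRED_FIELDS - application.keys())]
    # Flag a budget whose line items do not add up to the stated total
    budget = application.get("budget", {})
    if budget and sum(budget.get("line_items", [])) != budget.get("total", 0):
        flags.append("budget line items do not sum to stated total")
    return flags

# Invented example applications: the first is complete, the second is not
apps = [
    {"budget": {"line_items": [60, 40], "total": 100},
     "registration_certificate": "...", "audited_statement": "..."},
    {"budget": {"line_items": [60, 40], "total": 120},
     "registration_certificate": "..."},
]
flagged = {i: flag_for_review(a) for i, a in enumerate(apps) if flag_for_review(a)}
```

Keeping the check this narrow also answers the quadrant's key question directly: every flag names the detail it found, so nothing is abstracted away from human attention.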
4. Wide abstraction (Context flatteners)
- What it is: Broad, generalised tools that prioritise scale and efficiency, but risk erasing context and relationships.
- Example: A philanthropic CRM tool employs LLMs to rank grantees on 'impact potential' using prior grant reports, rewarding those with well-written or funder-aligned language rather than those doing contextually critical work.
- Why it fails: Wide abstraction tools produce outputs that are disconnected from the lived realities of the people and systems they aim to serve. They often impose generic solutions that lack local resonance or trust.
- Key risk: Policy recommendations generated by LLMs that ignore cultural nuance, power dynamics, or local histories.
- Key question: Does this tool flatten complexity, producing solutions no one truly owns?
Wide abstraction tools fail social systems because social systems are built on trust, context, and relationships. Change doesn't emerge from patterns or averages; it emerges from the slow, messy, human work of showing up, listening, and building together.
This requires moral discernment, cultural fluency, and the ability to hold space for uncertainty. Even the most sophisticated tools are not capable of these things. A tool cannot sense the difference between a pause of resistance and a pause of reflection. It cannot understand silence or the weight behind a hesitant request.
LLMs can play a role in social change, but they must stay narrow, supportive, and grounded in connection. They can amplify relationships (narrow connection), reveal patterns in systems (wide connection), or automate tasks that don't require human judgement (narrow abstraction). But they cannot replace the relational processes that make systems real.
Designing for a human age
The promise of LLMs is seductive. It offers speed, efficiency, and a sense of control: qualities we crave in complex, uncertain systems. But if we view connection as the foundational infrastructure and abstraction as a tool, how do we build (and fund) accordingly?
Four clusters of practice follow from the analysis:
1. Placement and scope
- Keep it narrow (bounded contexts) when automating.
- Keep it wide and human when mapping relationships.
- Avoid wide abstraction in relational domains (welfare, justice, health, community development). If you must use it, treat outputs as hints, never decisions.
- Assume drift; design for it.
2. Process and ownership
- Process matters. If a 'consensus' tool removes dissent and dialogue, it's producing hollow agreement.
- Ownership signals reality. If a decision is not of the community but about it, expect distance and eventual resistance.
- Messiness test. Did we stay in the mess to listen, disagree, compromise? If not, the outcome may travel poorly. Consensus that bypasses repair will not hold.
3. Measurement and accountability
- Measure what you can while protecting what you can't. Build explicit guardrails so that unmeasurable goods (trust, belonging, repair) are not crowded out.
- Use AI where failure is acceptable. Drafting, summarising, data hygiene: yes. Decisions about dignity, safety, or entitlements: no.
- Allow override without justification. People closest to the context must be free to resist machine outputs.
- Capture moments of failure. Document not just technical bugs, but also when people forget how to act without the tool.
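One way to make the override rule concrete in a tool's design is to record the machine suggestion and the human decision side by side, with the 'reason' field deliberately optional. A minimal sketch with illustrative names, not a prescription:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    case_id: str
    machine_suggestion: str
    human_decision: str
    reason: Optional[str] = None  # deliberately optional: no justification required

class DecisionLog:
    """Keeps machine output advisory: the human choice is always recorded,
    and overriding the machine needs no explanation."""
    def __init__(self):
        self.entries: list[Decision] = []

    def record(self, case_id, machine_suggestion, human_decision, reason=None):
        self.entries.append(Decision(case_id, machine_suggestion, human_decision, reason))

    def override_rate(self) -> float:
        """Share of cases where the human departed from the machine suggestion."""
        if not self.entries:
            return 0.0
        overrides = sum(e.machine_suggestion != e.human_decision for e in self.entries)
        return overrides / len(self.entries)

log = DecisionLog()
log.record("c1", machine_suggestion="defer", human_decision="approve")  # override, no reason given
log.record("c2", machine_suggestion="approve", human_decision="approve")
```

Tracking the override rate also serves the 'capture moments of failure' point above: a rate near zero may mean the tool is trusted, or that people have stopped exercising judgement at all.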
4. Funding and institutional practice
- Finance the foundational layers. Budget for convening, accompaniment, group formation, and follow-through, not just transactions.
- Reward stewardship, not throughput. Celebrate organisations that prune, pause, and repair, not just those that scale.
- Create collision spaces. Funders should host containers for connection: open-ended gatherings where practitioners make meaning together, not just report up.
- Reframe accountability. Shift from counting outputs to honouring conditions: psychological safety, trust density, and role clarity across the network.
The work we do in the sector is the work of belonging, and it doesn't scale by flattening. It scales like a forest: root by root, mycelium by mycelium, canopy by canopy, alive and adaptive, held together by relationships we cannot always see and must never forget.
Disclaimer: IDR is funded by Rohini Nilekani Philanthropies.
—
Know more
- Learn about the city of Amsterdam's failed attempt to break a decade-long trend of implementing discriminatory algorithms.
- Learn how philanthropy can guide the responsible development of AI.
- Read about the opportunities that AI offers to communities and networks.
- Learn about how nonprofits in India are using AI, the challenges they face, and their comfort levels with AI tools.