
TL;DR

EA Finland ran an intensive weekend course in spring 2025, introducing EA concepts to 20 participants across three Finnish cities. The format proved more accessible than our traditional 5-week program, with strong engagement metrics and clear potential for scaling up.

Why We Created This Program

The crash course was developed as an alternative to our traditional Intro Program, targeting busy people who find it hard to commit to 5 weeks of weekly sessions. We modeled it after EA Bergen's weekend course, focusing on sparking curiosity and enthusiasm rather than comprehensive coverage of every topic.

Content and Schedule

The content was loosely based on major topics from the Intro Program, covering core concepts and cause areas without getting bogged down in thorough analysis. The course comprised six 1-hour workshops. Since presentations are less information-dense than texts, we prioritized engagement over exhaustive detail.

Saturday:

  • 12:00-12:30 Introduction
  • 12:30-13:30 Cost-effectiveness & Expected Value
  • 13:30-14:10 Lunch
  • 14:10-15:10 Rationality
  • 15:10-15:30 Break
  • 15:30-16:30 Moral Circle
  • 16:30-17:00 Wrap-up

Sunday:

  • 12:00-12:30 Introduction
  • 12:30-13:30 ITN & Cause Prioritization
  • 13:30-14:10 Lunch
  • 14:10-15:10 X-risks & Longtermism
  • 15:10-15:30 Break
  • 15:30-16:30 Career Planning
  • 16:30-17:00 Wrap-up

We got a fair number of compliments on the pacing, with people praising the timing of the breaks.

Key Sessions

In the introduction, we covered discussion guidelines, had an introduction round, and presented what EA is and why it matters.

Workshop 1: The cost-effectiveness workshop underlined the vast difference between an average charity and an exceptional one, used Cassandra Xia's game to explain expected value, and included group exercises. We mainly used global health and development as examples since they rely on more intuitive utilitarian calculations.
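The core idea behind the workshop's expected-value exercises can be sketched in a few lines. The numbers below are hypothetical illustrations, not figures from the actual workshop or game:

```python
# Expected value: the probability-weighted average of outcomes.
# All numbers here are hypothetical, for illustration only.

def expected_value(outcomes):
    """outcomes: list of (probability, value) pairs; probabilities sum to 1."""
    return sum(p * v for p, v in outcomes)

# A "safe" intervention: certainly helps 100 people.
safe = expected_value([(1.0, 100)])

# A "risky" intervention: 10% chance of helping 2000 people, else none.
risky = expected_value([(0.1, 2000), (0.9, 0)])

print(safe, risky)  # 100.0 200.0
```

Despite usually helping no one, the risky intervention has twice the expected impact, which is the kind of counterintuitive result the game is designed to surface.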

Workshop 2: The rationality session introduced participants to rationality and cognitive biases. Participants commented that they were surprised to see their cognitive biases in action, and the conversations were especially lively. 

Workshop 3: The moral circle presentation built upon rational thinking foundations to challenge moral beliefs, using animal welfare as a clear example of moral progress through rational contemplation.

Workshop 4: The ITN workshop included a presentation about the framework and the causes 80,000 Hours has ranked as most pressing. Participants then had a chance to assess a cause of their own and compare it to mainstream EA causes.
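The participants' cause-assessment exercise follows the usual ITN logic of combining importance, tractability, and neglectedness scores multiplicatively. A minimal sketch, with entirely hypothetical causes and scores:

```python
# Toy ITN comparison. Causes and scores are hypothetical illustrations,
# not the actual assessments made in the workshop.

def itn_score(importance, tractability, neglectedness):
    # The three factors are combined multiplicatively, so a cause
    # scoring zero on any axis scores zero overall.
    return importance * tractability * neglectedness

causes = {
    "Cause A (hypothetical)": itn_score(8, 4, 6),   # big, neglected, hard
    "Cause B (hypothetical)": itn_score(6, 7, 2),   # tractable but crowded
}
for name, score in sorted(causes.items(), key=lambda kv: -kv[1]):
    print(name, score)
```

The multiplicative structure is what makes the comparison with mainstream EA causes informative: a cause can only rank highly if it does reasonably well on all three axes at once.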

Workshop 5: The X-risks & longtermism session focused mainly on AI safety. We played the short film "Writing Doom" and gave discussion prompts. Watching a video provided good structural variety while succinctly communicating common objections to AI safety. One participant complimented our courage to address AI safety frankly and directly.

Workshop 6: In the career workshop, we ran through main points from 80,000 Hours' curriculum. 

The Sunday wrap-up guided participants to think about next steps, introducing community events, the upcoming EAGx conference, and a reading group targeted at crash course graduates.

Results and Impact

20 people completed the program (though 34 signed up initially). In terms of retention, it performed similarly to our fall Intro Program.

Graduates gave an average of 8/10 for lesson clarity and estimated a 75% likelihood of learning more about EA in the next 6 months. Seven graduates signed up for career advising, reading groups, or other EA events within a month. Participants rated welcomingness at 8.73/10 and gave an 80% positive response to interest in a casual reunion.

Keeping Participants Engaged After the Program

As with the Intro Program, it is important that there is a clear next step after completing the crash course to support participants' EA journey. For the Intro Program we've organised a post-program event, but that format wouldn't fit the crash course as well. In Helsinki, home to the biggest cohort, our volunteers organised a reading group for the book "The Precipice", which 5 of the 12 graduates joined. The other groups didn't have enough participants to do the same. For Helsinki, post-program engagement was significantly higher than for the multi-week Intro Programs of the past couple of years, which suggests our goal of making the crash course spark curiosity to learn more about EA worked out.

What Worked Well

Students proved much more receptive than working professionals—they're more energetic, curious, social, and idealistic. The crash course seemed to energize participants more than our traditional program, though it's hard to make direct comparisons. The materials were more diverse, including videos, games, exercises and presentations, which helped maintain engagement.

Areas for Improvement

Social Connection: The opening session should be especially interactive to get people into a participatory mode. We should have added participants to a group chat and encouraged them to schedule follow-up conversations while still together in person.

Marketing: We need more lead time for promotion across multiple channels. This time we had only three weeks of marketing for the first crash course. Our Meta advertisements yielded no sign-ups, since we started them only a week before the event. Referrals from friends and posters on student dorm doors proved most effective.

"Where did you hear about this course? (Select all that apply.)"

  • Friend: 9 (39%)
  • Poster: 7 (30%)
  • Telegram: 4 (17%)
  • Email: 3 (13%)
  • Kide app: 3 (13%)
  • Street marketer: 1 (4%)
  • Local event listing: 1 (4%)
  • Instagram: 1 (4%)
  • WhatsApp: 1 (4%)

Respondents citing 1 source: 16; 2 sources: 7 (poster and friend, or Telegram); 3 sources: 0. Total: 23 respondents across 3 crash courses.
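As a sanity check, the survey percentages are simply each channel's mention count divided by the 23 respondents, and the 30 total mentions are consistent with 16 single-source and 7 two-source respondents:

```python
# Recomputing the survey shares: percentage = mentions / 23 respondents.
responses = {
    "Friend": 9, "Poster": 7, "Telegram": 4, "Email": 3, "Kide app": 3,
    "Street marketer": 1, "Local event listing": 1,
    "Instagram": 1, "WhatsApp": 1,
}
total_respondents = 23

for source, count in responses.items():
    print(f"{source}: {count} ({count / total_respondents:.0%})")

# 30 mentions = 16 respondents x 1 source + 7 respondents x 2 sources.
assert sum(responses.values()) == 16 * 1 + 7 * 2
```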

Practical Considerations

The total cost across 3 weekend courses was approximately €2,300, including venue reservations (€733), food (€983), and marketing (€588). Future iterations could run much cheaper using free venues and more targeted marketing approaches.

The whole crash course project required 360 paid working hours plus volunteer time. Now that the materials and methods exist, the effort can mostly be concentrated into one intense week rather than spread over months like the Intro Program. Local organizers were notably less enthusiastic about running the crash course than the traditional fellowship we have run since 2021: asked to rate their willingness on a 10-point scale, organizers averaged 5.5 for the crash course versus 8.5 for the Intro Program.

Key concerns included the higher upfront effort required for an unfamiliar format, the sacrifice of personal weekend time, increased logistical overhead (food, venue reservations, marketing), and the need for larger participant numbers to justify the investment. Organizers also valued the deeper relationships and peer-to-peer dynamics possible in smaller reading groups, noting that the Intro Program better models how EAs actually interact in community settings. Familiarity with existing Intro Program routines also made that format feel less stressful to facilitate.

Comparison to Intro Program

The crash course provides lower information density but higher accessibility and participation. The Intro Program gives higher quality understanding but faces higher dropout rates due to the longer commitment. We might tentatively recommend the crash course for groups needing rapid growth or at risk of dying out.

Moving Forward

The crash course serves as both an awareness-spreading tool and a way to welcome people to the community. Rather than funneling people into the Intro Program, it leads directly to reading groups or other community engagement. The format may be ideal for spring terms when traditional programs struggle with participation.

For questions, slides, workshops, and other materials, please contact Sarah Bluhm or Iska Knuuttila on the Forum, or send mail to eafinland@altruismi.fi 
