Glenn Kessler writes the Fact Checker column for The Post.
As the father of a high school senior who suffered this spring through the angst of waiting for college acceptance notices at a time when some top schools reject more than 90 percent of applicants, I have a simple suggestion to reduce some of the craziness.
Place two limits on college applications: Students should be allowed to submit no more than 10 through the Common Application and no more than four to the eight Ivy League universities.
The Common App, which was created 35 years ago with the sensible goal of streamlining the college admissions process, currently limits students to 20 applications. But that’s too many. The ease of applying — and the fear of rejection — drives students to apply to ever more schools.
The root of the problem, of course, is the various college ranking systems, which credit schools for their selectivity. That encourages schools to seek ways to boost the number of applications they receive. Our mailbox was flooded with college brochures just weeks after our then-sophomore son took the PSAT.
Washington University in St. Louis, for example, even shamelessly promotes the fact that, unlike most selective colleges, it requires no supplemental essays beyond the basic Common App. You just click a box and, presto, your application is submitted (for a $75 fee, of course).
The net result is that colleges are being overwhelmed with applications by highly qualified students — and turning most of them down.
Lee Coffin, undergraduate admissions dean at Tufts University, wrote in a blog post last month that 74 percent of Tufts’s nearly 20,000 applicants were deemed qualified for admission — and that 42 percent were recommended for acceptance. But in the end the university could only accept a record-low 16 percent, making it nearly as competitive as Cornell University, an Ivy school. Five years ago, Tufts accepted 26 percent of applicants.
In response, students and their parents are engaged in a Great National Freak-out. The online forums of the College Confidential Web site are filled with anger at a system that has spun out of control. “This process is insane," wrote one parent on March 24. “I’m Harvard ’84, and my son is so much more accomplished and smart than I was in high school. But he’s getting rejected from places like Cornell and Northwestern, and waitlisted at Chicago (which had a 40% admit rate back in the day)."
Even more painful were the posts of parents whose children were rejected by every single college they applied to.
Limits on the number of applications would be the first step to restoring some sanity.
Take, for example, the eight Ivy League universities. I went to Brown University as an undergraduate and Columbia University as a graduate student, and I have visited all but one of the others. They are all high-caliber universities, but they have very different strengths and cultures; they just happen to be in the same football league. Students who want to go to Dartmouth should have little reason to apply to Brown, and vice versa, unless they simply are trying to buy a brand name.
A limit of four Ivy League applications would force students to make choices and understand the differences among the schools. Congratulations to the handful of students who got accepted to all eight this year, but they can go only to one school. Because they applied to so many schools, other well-qualified candidates had to cope with rejection notices.
The Ivies could accomplish this change by having each student certify that he or she is applying to no more than three other Ivies — just as students who apply for an early decision promise to attend the school if selected.
Meanwhile, the Common App could do its part by reducing its maximum number of applications from 20 to 10. Doing so would open opportunities for many students because it would shrink the competition to the students who truly want to attend those colleges, rather than including people who willy-nilly check a box.
My wife and I worked closely with our son, who attends Thomas Jefferson High School for Science and Technology in Alexandria, to identify the colleges that best met his needs and talents. He applied to just eight and, happily, got into five of his top six choices. But we were on the edge of our seats for weeks, as things could have easily gone the other way. That’s because getting into college has largely become a lottery.
Attending the right college should not be a game of chance. Limiting the number of applications would improve the odds for everyone.
Whenever a college student asks me, a veteran high-school English educator, about the prospects of becoming a public-school teacher, I never think it’s enough to say that the role is shifting from "content expert" to "curriculum facilitator." Instead, I describe what I think the public-school classroom will look like in 20 years, with a large, fantastic computer screen at the front, streaming one of the nation’s most engaging, informative lessons on a particular topic. The "virtual class" will be introduced, guided, and curated by one of the country’s best teachers (a.k.a. a "super-teacher"), and it will include professionally produced footage of current events, relevant excerpts from powerful TED Talks, interactive games students can play against other students nationwide, and a formal assessment that the computer will immediately score and record.
I tell this college student that in each classroom, there will be a local teacher-facilitator (called a "tech") to make sure that the equipment works and the students behave. Since the "tech" won’t require the extensive education and training of today’s teachers, the teachers’ union will fall apart, and that "tech" will earn about $15 an hour to facilitate a class that could include more than 50 students. This new progressive system will be justified and supported by the American public for several reasons: Each lesson will be among the most interesting and efficient lessons in the world; millions of dollars will be saved in reduced teacher salaries; the "techs" can specialize in classroom management; performance data will be standardized and immediately produced (and therefore "individualized"); and the country will finally achieve equity in its public school system.
"So if you want to be a teacher," I tell the college student, "you better be a super-teacher."
I used to think I was kidding, or at least exaggerating. Now I’m not so sure. When I consulted a local career counselor who is on the brink of retirement after a lifetime in the public schools, he said I was wrong about my prediction—but only about it taking 20 years. "Try five or 10," he said.
I smiled and laughed, and then suddenly stopped. I thought about how many times I had heard the phrase "teacher as facilitator" over the past year. I recalled a veteran teacher who recently said with anguish, "We used to be appreciated as experts in our field." I thought about the last time I walked into a local bookstore, when the employee asked if she could order a book for me from Amazon. Are teachers going the way of local bookstores? Suddenly I felt like the frog in the pot of water, feeling a little warm, wondering if I was going to have to jump before I retire in 20 years. Try five or 10.
I started reflecting. A decade and a half ago, I dedicated two years to earning a master’s degree in English literature; this training included a couple of pedagogy courses, and it focused on classic literature, the nature of reading and writing, and the best ways to teach it. A decade ago, my school sent me to an Advanced Placement English conference at which I studied literary analysis for three days. As with the graduate program, I don’t remember the conference involving technology—it was simply the teacher, students, and a lot of books. Now, I don’t remember the last time I attended, or even heard of, any professional-development training focused on my specific subject matter. Instead, these trainings concentrate on incorporating technology into the classroom, utilizing assessment data, or finding new ways to become a facilitator.
When I did some research to see if it was just me sensing this transformation taking place, I was overwhelmed by the number of articles all confirming what I had suspected: The relatively recent emergence of the Internet, and the ever-increasing ease of access to the web, has unmistakably displaced the teacher from the former role of dictating subject content. These days, teachers are expected to concentrate on the "facilitation" of factual knowledge that is suddenly widely accessible.
In 2012, for example, MindShift’s Aran Levasseur wrote that "all computing devices—from laptops to tablets to smartphones—are dismantling knowledge silos and are therefore transforming the role of a teacher into something that is more of a facilitator and coach." Joshua Starr, a nationally prominent superintendent, recently told NPR, "I ask teachers all the time, if you can Google it, why teach it?" And it’s already become a cliché that the teacher should shift from being a "sage on the stage" to a "guide on the side."
I started looking around me. Teachers like me are uploading onto the web tens of thousands of lesson plans and videos that are then being consolidated and curated by various organizations. In other words, the intellectual property that once belonged to teachers is now openly available on the Internet.
And the teachers unions don’t seem to be stopping this crowdsourcing; in fact, the American Federation of Teachers created sharemylesson.com ("By teachers, for teachers"), which says it offers more than 300,000 free resources for educators. And even though its partner, TES Connect, often charges money for its materials, the private company claims that nearly 5 million resources are downloaded from its sites weekly. Meanwhile, TeachersPayTeachers.com, an open marketplace for lesson plans and resources that launched in 2006, says it has more than 3 million users, including 1 million who signed up in the past year. Close to 1 million educators have purchased lesson plans from the site, while some teachers are earning six figures by creating the site’s top-selling materials.
I think it used to be taboo for teachers to borrow or buy plans written by other professionals, but it seems that times are changing. Just last week, I spoke with a history teacher from Santa Maria, California, who bluntly said, "I don’t ever write my own lesson plans anymore. I just give credit to the person who did." He explained, rather reasonably, that the materials are usually inexpensive or free; are extremely well made; and often include worksheets, videos, assessments, and links to other resources. Just as his administrators request, he can focus on being a facilitator, specializing in individualized instruction.
I’ve started recognizing a common thread to the latest trends in teaching. Flipped learning, blended learning, student-centered learning, project-based learning, and even self-organized learning—they all marginalize the teacher’s expertise. Or, to put it more euphemistically, they all recast the teacher in a more facilitative role.
In "flipped learning," the student is expected to absorb the core knowledge at home by watching videos and then engage in projects, problem-solving, and critical-thinking activities at school, as facilitated by his or her teacher. Project Tomorrow’s nationwide 2013 survey found that 41 percent of administrators say "pre-service teachers should learn how to set up a flipped class model before getting a teaching credential," while 66 percent of principals say "pre-service teachers should learn to create and use video and other digital media." And once again, when the teacher relies on digital media to provide the core knowledge, his or her role will inherently shift to that of a facilitator. The University of Washington’s Center for Teaching and Learning, for example, explicitly describes "flipped learning" as a way for students to "gain control of the learning process" while "the instructors become facilitators … the instructor is there to coach and guide them."
Likewise, "blended learning"—in which students take at least part of a class online while supervised by adults—is now offered by about 70 percent of K-12 public-school districts. According the Clayton Christensen Institute—a nonprofit, nonpartisan think tank that touts "disruptive innovation"—the number of K-12 students who took an online course increased from roughly 45,000 in 2000, to more than 3 million in 2009. The institute also projects that half of all high-school classes will be delivered online by 2019.
I asked a longtime friend of mine—a high-school principal in northern California—to tell me candidly what he thought about blended learning. He said, "We’re at the point where the Internet pretty much supplies everything we need. We don’t really need teachers in the same way anymore. I mean, sure, my daughter gets some help from her teachers, but basically everything she learns—from math to band—she can get from her computer better than her teachers."
At a seminar about project-based learning, I told the presenter with an increasing sense of desperation, "You know, some of us English teachers still believe that teaching literature is our primary job." He smirked and put his pointer finger near his thumb and said, "A very little part of your job." And I recently watched the TED Talk that won the $1 million prize at TED2013, the one in which Sugata Mitra stated that "schools as we know them are obsolete" because the country no longer needs teachers.
In the original idea of the "flipped classroom," it seems that the teacher was responsible for recording the lecture and posting the video online, but it’s now becoming more efficient to link to a professional video. And there are now thousands of videos from which to choose. Khan Academy—a nonprofit that claims to provide "a free, world-class education for anyone, anywhere"—features more than 6,500 free videos and advertises over 100,000 interactive lessons on various subjects. According to Forbes, more than 500,000 teachers worldwide use these videos, which also have over 500 million views on YouTube. Meanwhile, YouTube’s own education channel ("Where anyone, anywhere can learn or teach anything") has 1 million-plus subscribers. And about 2,000 TED Talks are available to view for free online and have been seen more than 1 billion times total. The list goes on.
I recently spoke with Monica Brady-Myerov, the CEO and founder of Listen Current, a website that curates the best of public radio, including current events, and offers the three- to five-minute clips alongside a full set of lesson plans and worksheets. When I asked her about the recent boom in lesson-plan production, she said, "It’s like the wild west right now, both in terms of online resources and educational technology. It’s why I quit my job [as a veteran award-winning public radio journalist], so I could ride out west."
I found brief solace in the idea that I could still be the professional teacher who compiles all these resources—and then I found Edmodo. Branding itself as the "Facebook for schools," Edmodo launched only in 2008 and now has more than 48 million members. I signed up just to see what it was all about. Within five minutes, I found a great lesson on Romeo and Juliet by John Green (a favorite author among teens, and on Time’s list of the "100 most influential people"), a Khan Academy video, immediate access to 100 famous speeches, and a somewhat fun interactive game based on Lord of the Flies. According to EdSurge, the Edmodo CEO earlier this month said, "We want to do for teacher resources what Netflix does for movies."
Well then. At least I can organize the video lessons and put them together in a sensible order—except that Activate Instruction is already creating a free and open online tool that is "similar to Wikipedia" and will "help put resources and curriculum in one place that any teacher can use." The company even organizes these materials into logical "playlists"; the first one I looked at contained 11 different professional resources for teaching a specific skill, including printable worksheets, an engaging video, an essay prompt, and a final assessment. And again, this company is just getting started—Activate Instruction was announced only in 2013.
I measured myself against these websites and Internet companies. It seems clear that they already have a distinct advantage over me as an individual teacher. They have more resources, more money, an entire staff of professionals, and they get to concentrate on producing their specialized content, while the teacher is—almost by default—inherently encouraged to transform into a facilitator. Some people might cringe at a "Netflix for teachers," but it’s almost impossible to deny the inherent advantages Netflix has over a local DVD store, and it’s easy to imagine the potential improvements that could happen to these modern services.
For how many more years can I compete? A dozen years ago, I proudly worked for about 20 hours to create a lesson plan that taught poetic meter through the analysis of a rap song (I remember continually rewinding the cassette in my Walkman). Last week, the first lesson I saw on sharemylesson.com was a thoroughly analyzed song by Katy Perry, with a printable worksheet that featured at least 10 literary devices, along with a link to her video. ListenCurrent.com gives me immediate access to public-radio clips that took me hours to accumulate just a few months ago. I may not use Edmodo or anything like it this year, but I also didn’t use Facebook in its first few years—or Amazon, or cell phones, or even ATMs. Isn’t it probable that this educational technology is going to be overwhelmingly awesome in 20 years? I hear the career counselor’s voice: "Try five or 10."
I think to myself: These resources are already good for education, and they’re only getting better. Part of me is really excited that in two decades, the giant interactive classroom computer screen that I foresaw is going to be far more sophisticated than I can possibly imagine. Why should I stand in the way of crowdsourced lesson plans and professionally edited video tutorials? Shouldn’t I stop trying to compete as an individual "sage on the stage," appreciate the modern efficiency of today’s resources, and re-invest my time as their enthusiastic "guide on the side"?
When I told the school’s golf coach about flipped learning, I explained that it would be as if he asked the kids to go home and watch YouTube videos that teach proper mechanics and then practice those skills under his supervision on the course. He laughed and answered, "Oh, we should absolutely do that. Hank Haney’s videos are way better than anything I can show them."
And if I compete with Hank Haney, shouldn’t I be Hank Haney? In other words, if I think my lesson plans or video tutorials rival some of the best on the Internet (for now), shouldn’t I be trying to make six figures on the open marketplace at teacherspayteachers.com or as a curriculum designer for a private company? The dilemma intensifies when I suspect that uncredentialed "techs" might bust the teacher unions in 20 years ("try five or 10").
I looked through the current trends for some sign that the future classroom I envisioned won’t be realized within 20 years. I read Terrance Ross’s analysis of the Bridge International Academies and how their "scripted instruction," combined with technology and statistical feedback, has efficiently earned revenue while improving education in Kenya. Fast Company put the company on the list of "The World’s Top 10 Innovative Companies in Education," citing the fact that it’s already serving over 110,000 students, is significantly outperforming neighboring schools in both reading and math, and plans on educating 10 million students by 2025.
In a similar vein, live-streaming and other technology are also allowing some modern churches to move toward a "multisite" format, one in which a single pastor can broadcast his sermons to satellite churches guided by pastors who—this might sound familiar—concentrate on the facilitation of a common itinerary. Ed Stetzer recently wrote on ChristianityToday.com that "multisite is the new normal," and later explained, "it's easier to create another extension site than it is to create another faithful pastor who is a great communicator … it's easier to start a campus and beam my sermons to other locales than it is to raise up leaders and laypeople."
And as I mentioned earlier, last night I watched Sugata Mitra earn a standing ovation, $1 million, and a partnership with Microsoft for his TED Talk, which declared that the "future of learning" is a "school built in the cloud"—one that doesn’t require teachers. It seems fitting that I watched the speech from my laptop, and that Mitra is a former computer-science teacher. Last November, Newcastle University opened the first "global hub" based on Mitra’s research, which suggests that children in self-organized learning environments "can learn almost anything by themselves" (and a computer).
This morning I spoke with a well-respected high-school teacher who supervises a blended course in digital photography. The course is mostly taught online, but students meet once every two weeks in the classroom. "So in five years, if a student has five teachers using this blended-learning style, they can just stay home the entire semester?" I asked. Apparently they could. And that, it seems, is homeschooling—with the high school’s resources.
I wonder why larger discussions related to these trends aren’t happening with greater urgency, if they’re happening at all. How hot does the water have to get before the best teachers start jumping for jobs in the private sector? As local communities and school districts nationwide commit to blended-learning programs, are they considering the long-term ramifications for the nature of their classrooms? Does the American Federation of Teachers know that, as its teachers upload their lesson plans into the cloud, they might be helping build an entirely different kind of school, one with self-organized learning environments instead of teachers?
I don’t have many answers in this brave new world, but I feel like I can draw one firm line. There is a profound difference between a local expert teacher using the Internet and all its resources to supplement and improve his or her lessons, and a teacher facilitating the educational plans of massive organizations. Why isn’t this line being publicly and sharply delineated, or even generally discussed? This line should be rigorously guarded by those who want to keep education professionals in the center of each classroom. Those calling for teachers to "transform their roles," regardless of motive or intentionality, are quietly erasing this line—effectively deconstructing the role of the teacher as it’s always been known.
Meanwhile, back on my campus, I wonder about the advice I should give a new teacher. Should I encourage this aspiring educator to fight for his or her role as the local expert, or simply get good at facilitating the best lessons available? Should I assure this person about my union and the notion of tenure, or should I urgently encourage him or her to create a back-up plan?
And when I think back to the original discussion, I wonder what I’m supposed to tell the college graduates who ask about earning a teaching credential. Because while I used to think I was scaring these young people with my 20-year predictions, now I’m afraid I’m giving them false hope.
A brash tech entrepreneur thinks he can reinvent higher education by stripping it down to its essence, eliminating lectures and tenure along with football games, ivy-covered buildings, and research libraries. What if he's right?
On a Friday morning in April, I strapped on a headset, leaned into a microphone, and experienced what had been described to me as a type of time travel to the future of higher education. I was on the ninth floor of a building in downtown San Francisco, in a neighborhood whose streets are heavily populated with winos and vagrants, and whose buildings host hip new businesses, many of them tech start-ups. In a small room, I was flanked by a publicist and a tech manager from an educational venture called the Minerva Project, whose founder and CEO, the 39-year-old entrepreneur Ben Nelson, aims to replace (or, when he is feeling less aggressive, “reform") the modern liberal-arts college.
Minerva is an accredited university with administrative offices and a dorm in San Francisco, and it plans to open locations in at least six other major world cities. But the key to Minerva, what sets it apart most jarringly from traditional universities, is a proprietary online platform developed to apply pedagogical practices that have been studied and vetted by one of the world’s foremost psychologists, a former Harvard dean named Stephen M. Kosslyn, who joined Minerva in 2012.
Nelson and Kosslyn had invited me to sit in on a test run of the platform, and at first it reminded me of the opening credits of The Brady Bunch: a grid of images of the professor and eight “students" (the others were all Minerva employees) appeared on the screen before me, and we introduced ourselves. For a college seminar, it felt impersonal, and though we were all sitting on the same floor of Minerva’s offices, my fellow students seemed oddly distant, as if piped in from the International Space Station. I half expected a packet of astronaut ice cream to float by someone’s face.
Within a few minutes, though, the experience got more intense. The subject of the class—one in a series during which the instructor, a French physicist named Eric Bonabeau, was trying out his course material—was inductive reasoning. Bonabeau began by polling us on our understanding of the reading, a Nature article about the sudden depletion of North Atlantic cod in the early 1990s. He asked us which of four possible interpretations of the article was the most accurate. In an ordinary undergraduate seminar, this might have been an occasion for timid silence, until the class’s biggest loudmouth or most caffeinated student ventured a guess. But the Minerva class extended no refuge for the timid, nor privilege for the garrulous. Within seconds, every student had to provide an answer, and Bonabeau displayed our choices so that we could be called upon to defend them.
Bonabeau led the class like a benevolent dictator, subjecting us to pop quizzes, cold calls, and pedagogical tactics that during an in-the-flesh seminar would have taken precious minutes of class time to arrange. He split us into groups to defend opposite propositions—that the cod had disappeared because of overfishing, or that other factors were to blame. No one needed to shuffle seats; Bonabeau just pushed a button, and the students in the other group vanished from my screen, leaving my three fellow debaters and me to plan, using a shared bulletin board on which we could record our ideas. Bonabeau bounced between the two groups to offer advice as we worked. After a representative from each group gave a brief presentation, Bonabeau ended by showing a short video about the evils of overfishing. (“Propaganda," he snorted, adding that we’d talk about logical fallacies in the next session.) The computer screen blinked off after 45 minutes of class.
The system had bugs—it crashed once, and some of the video lagged—but overall it worked well, and felt decidedly unlike a normal classroom. For one thing, it was exhausting: a continuous period of forced engagement, with no relief in the form of time when my attention could flag or I could doodle in a notebook undetected. Instead, my focus was directed relentlessly by the platform, and because it looked like my professor and fellow edu-nauts were staring at me, I was reluctant to ever let my gaze stray from the screen. Even in moments when I wanted to think about aspects of the material that weren’t currently under discussion—to me these seemed like moments of creative space, but perhaps they were just daydreams—I felt my attention snapped back to the narrow issue at hand, because I had to answer a quiz question or articulate a position. I was forced, in effect, to learn. If this was the education of the future, it seemed vaguely fascistic. Good, but fascistic.
Minerva, which operates for profit, started teaching its inaugural class of 33 students this month. To seed this first class with talent, Minerva gave every admitted student a full-tuition scholarship of $10,000 a year for four years, plus free housing in San Francisco for the first year. Next year’s class is expected to have 200 to 300 students, and Minerva hopes future classes will double in size roughly every year for a few years after that.
Those future students will pay about $28,000 a year, including room and board, a $30,000 savings over the sticker price of many of the schools—the Ivies, plus other hyperselective colleges like Pomona and Williams—with which Minerva hopes to compete. (Most American students at these colleges do not pay full price, of course; Minerva will offer financial aid and target middle-class students whose bills at the other schools would still be tens of thousands of dollars more per year.) If Minerva grows to 2,500 students a class, that would mean an annual revenue of up to $280 million. A partnership with the Keck Graduate Institute in Claremont, California, allowed Minerva to fast-track its accreditation, and its advisory board has included Larry Summers, the former U.S. Treasury secretary and Harvard president, and Bob Kerrey, the former Democratic senator from Nebraska, who also served as the president of the New School, in New York City.
Nelson’s long-term goal for Minerva is to radically remake one of the most sclerotic sectors of the U.S. economy, one so shielded from the need for improvement that its biggest innovation in the past 30 years has been to double its costs and hire more administrators at higher salaries.
The paradox of undergraduate education in the United States is that it is the envy of the world, but also tremendously beleaguered. In that way it resembles the U.S. health-care sector. Both carry price tags that shock the conscience of citizens of other developed countries. They’re both tied up inextricably with government, through student loans and federal research funding or through Medicare. But if you can afford the Mayo Clinic, the United States is the best place in the world to get sick. And if you get a scholarship to Stanford, you should take it, and turn down offers from even the best universities in Europe, Australia, or Japan. (Most likely, though, you won’t get that scholarship. The average U.S. college graduate in 2014 carried $33,000 of debt.)
Financial dysfunction is only the most obvious way in which higher education is troubled. In the past half millennium, the technology of learning has hardly budged. The easiest way to picture what a university looked like 500 years ago is to go to any large university today, walk into a lecture hall, and imagine the professor speaking Latin and wearing a monk’s cowl. The most common class format is still a professor standing in front of a group of students and talking. And even though we’ve subjected students to lectures for hundreds of years, we have no evidence that they are a good way to teach. (One educational psychologist, Ludy Benjamin, likens lectures to Velveeta cheese—something lots of people consume but no one considers either delicious or nourishing.)
In recent years, other innovations in higher education have preceded Minerva, most famously massive open online courses, known by the unfortunate acronym MOOCs. Among the most prominent MOOC purveyors are Khan Academy, the brainchild of the entrepreneur Salman Khan, and Coursera, headed by the Stanford computer scientists Andrew Ng and Daphne Koller. Khan Academy began as a way to tutor children in math, but it has grown to include a dazzling array of tutorials, some very effective, many on technical subjects. Coursera offers college-level classes for free (you can pay for premium services, like actual college credit). There can be hundreds of thousands of students in a single course, and millions are enrolled altogether. At their most basic, these courses consist of standard university lectures, caught on video.
But Minerva is not a MOOC provider. Its courses are not massive (they’re capped at 19 students), open (Minerva is overtly elitist and selective), or online, at least not in the same way Coursera’s are. Lectures are banned. All Minerva classes take the form of seminars conducted on the platform I tested. The first students will by now have moved into Minerva’s dorm on the fifth floor of a building in San Francisco’s Nob Hill neighborhood and begun attending class on Apple laptops they were required to supply themselves.
Each year, according to Minerva’s plan, they’ll attend university in a different place, so that after four years they’ll have the kind of international experience that other universities advertise but can rarely deliver. By 2016, Berlin and Buenos Aires campuses will have opened. Likely future cities include Mumbai, Hong Kong, New York, and London. Students will live in dorms with two-person rooms and a communal kitchen. They’ll also take part in field trips organized by Minerva, such as a tour of Alcatraz with a prison psychologist. Minerva will maintain almost no facilities other than the dorm itself—no library, no dining hall, no gym—and students will use city parks and recreation centers, as well as other local cultural resources, for their extracurricular activities.
The professors can live anywhere, as long as they have an Internet connection. Given that many academics are coastal-elite types who refuse to live in places like Evansville, Indiana, geographic freedom is a vital part of Minerva’s faculty recruitment.
The student body could become truly global, in part because Minerva’s policy is to admit students without regard to national origin, thus catering to the unmet demand of, say, prosperous Chinese and Indians and Brazilians for American-style liberal-arts education.
The Minerva boast is that it will strip the university experience down to the aspects that are shown to contribute directly to student learning. Lectures, gone. Tenure, gone. Gothic architecture, football, ivy crawling up the walls—gone, gone, gone. What’s left will be leaner and cheaper. (Minerva has already attracted $25 million in capital from investors who think it can undercut the incumbents.) And Minerva officials claim that their methods will be tested against scientifically determined best practices, unlike the methods used at other universities and assumed to be sound just because the schools themselves are old and expensive. Yet because classes have only just begun, we have little clue as to whether the process of stripping down the university removes something essential to what has made America’s best colleges the greatest in the world.
Minerva will, after all, look very little like a university—and not merely because it won’t be accessorized in useless and expensive ways. The teaching methods may well be optimized, but universities, as currently constituted, are only partly about classroom time. Can a school that has no faculty offices, research labs, community spaces for students, or professors paid to do scholarly work still be called a university?
If Minerva fails, it will lay off its staff and sell its office furniture and never be heard from again. If it succeeds, it could inspire a legion of entrepreneurs, and a whole category of legacy institutions might have to liquidate. One imagines tumbleweeds rolling through abandoned quads and wrecking balls smashing through the windows of classrooms left empty by students who have plugged into new online platforms.
The decor in the lobby of the Minerva office building nods to the classical roots of education: enormous Roman statues dominate. (Minerva is the Roman goddess of wisdom.) But where Minerva’s employees work, on the ninth floor, the atmosphere is pure business, in a California-casual sort of way. Everyone, including the top officers of the university, works at open-plan stations. I associate scholars’ offices with chalk dust, strewn papers, and books stacked haphazardly in contravention of fire codes. But here, I found tidiness.
One of the Minerva employees least scholarly in demeanor is its founder, chief executive, and principal evangelist. Ben Nelson attended the University of Pennsylvania’s Wharton School as an undergraduate in the late 1990s and then had no further contact with academia before he began incubating Minerva, in 2010. His résumé’s main entry is his 10-year stint as an executive at Snapfish, an online photo service that allows users to print pictures on postcards and in books.
Nelson is curly-haired and bespectacled, and when I met him he wore a casual button-down shirt with no tie or jacket. His ambition to reform academia was born of his own undergraduate experience. At Wharton, he was dissatisfied with what he perceived as a random barrage of business instruction, with no coordination to ensure that he learned bedrock skills like critical thinking. “My entire critique of higher education started with curricular reform at Penn," he says. “General education is nonexistent. It’s effectively a buffet, and when you have a noncurated academic experience, you effectively don’t get educated. You get a random collection of information. Liberal-arts education is about developing the intellectual capacity of the individual, and learning to be a productive member of society. And you cannot do that without a curriculum."
Students begin their Minerva education by taking the same four “Cornerstone Courses," which introduce core concepts and ways of thinking that cut across the sciences and humanities. These are not 101 classes, meant to impart freshman-level knowledge of subjects. (“The freshman year [as taught at traditional schools] should not exist," Nelson says, suggesting that MOOCs can teach the basics. “Do your freshman year at home.") Instead, Minerva’s first-year classes are designed to inculcate what Nelson calls “habits of mind" and “foundational concepts," which are the basis for all sound systematic thought. In a science class, for example, students should develop a deep understanding of the need for controlled experiments. In a humanities class, they need to learn the classical techniques of rhetoric and develop basic persuasive skills. The curriculum then builds from that foundation.
Nelson compares this level of direction favorably with what he found at Penn (curricular disorder), and with what one finds at Brown (very few requirements) or Columbia (a “great books" core curriculum). As Minerva students advance, they choose one of five majors: arts and humanities, social sciences, computational sciences, natural sciences, or business.
In academic circles, where overt competition between institutions is a serious breach of etiquette, Nelson is a bracing presence. (Imagine the president of Columbia telling the assembled presidents of other Ivy League schools, as Nelson sometimes tells his competitors, “Our goal is not to put you out of business; it is to lead you. It is to show you that there is a better way to do what you are doing, and for you to follow us.")
The other taboo Nelson ignores is acknowledgment of profit motive. “For-profit in higher education equates to evil," Nelson told me, noting that most for-profit colleges are indeed the sort of disreputable degree mills that wallpaper the Web with banner ads. “As if nonprofits aren’t money-driven!" he howled. “They’re just corporations that dodge their taxes." (See “The Law-School Scam.")
Minerva is built to make money, but Nelson insists that its motives will align with student interests. As evidence, Nelson points to the fact that the school will eschew all federal funding, to which he attributes much of the runaway cost of universities. The compliance cost of taking federal financial aid is about $1,000 per student—a tenth of Minerva’s tuition—and the aid wouldn’t be of any use to the majority of Minerva’s students, who will likely come from overseas.
Subsidies, Nelson says, encourage universities to enroll even students who aren’t likely to thrive, and to raise tuition, since federal money is pegged to costs. These effects pervade higher education, he says, but they have nothing to do with teaching students. He believes Minerva would end up hungering after federal money, too, if it ever allowed itself to be tempted. Instead, like Ulysses, it will tie itself to the mast and work with private-sector funding only. “If you put a drug"—federal funds—“into a system, the system changes itself to fit the drug. If [Minerva] took money from the government, in 20 years we’d be majority American, with substantially higher tuition. And as much as you try to create barriers, if you don’t structure it to be mission-oriented, that’s the way it will evolve."
When talking about Minerva’s future, Nelson says he thinks in terms of the life spans of universities—hundreds of years as opposed to the decades of typical corporate time horizons. Minerva’s very founding is a rare event. “We are now building an institution that has not been attempted in over 100 years, since the founding of Rice"—the last four-year liberal-arts-based research institution founded in this country. It opened in 1912 and now charges $53,966 a year.
So far, Minerva has hired its deans, who will teach all the courses for this inaugural class. It will hire rank-and-file faculty later in the year. One of Minerva’s main strategies is to lure a few prominent scholars from existing institutions. Other “new" universities, especially fantastically wealthy ones like King Abdullah University of Science and Technology, in Saudi Arabia, have attempted a similar strategy—at times with an almost cargo-cult-like confidence that filling their labs and offices with big-shot professors will turn the institutions themselves into important players.
Among the bigger shots hired by Minerva is Eric Bonabeau, the dean of computational sciences, who taught the seminar I participated in. Bonabeau, a physicist who has worked in academia and in business, studies the mathematics of swarming behavior (of bees, fish, robots), and his research helped inspire Michael Crichton’s terrible thriller Prey. Diane Halpern, a prominent psychologist, signed on this year as the dean of social sciences.
Minerva’s first major hire, Stephen M. Kosslyn, is a man I met in the fall of 1999, when I went to have my head examined. Kosslyn taught cognitive psychology and neuroscience for 32 years at Harvard, and during my undergraduate years I visited his lab and earned a few dollars here and there as one of his guinea pigs. The studies usually involved sticking my head in an fMRI machine so he and his researchers could record activity in my brain and observe which parts fired when.
Around that time, Kosslyn’s lab made news because it began to show how “mental imagery"—the experience of seeing things in your mind’s eye—really works. (One study involved putting volunteers into fMRI machines and asking them to hold an image of a cat in their head for as long as possible. You can try this exercise now. If you’re especially good at concentrating, the cat might vanish in a matter of a few seconds, as soon as your brain—distractible as a puppy—comes up with another object of attention.) Kosslyn served as Harvard’s dean of social sciences from 2008 to 2010, then spent two years at Stanford as the director of its Center for Advanced Study in the Behavioral Sciences. In 2013, after a few months of contract work for Minerva, he resigned from Stanford and joined Minerva as its founding dean.
Kosslyn speaks softly and slowly, with little emotional affect. Bald and bearded, he has an owlish stare, and at times during my recent conversations with him, he seemed to be scanning my brain with his eyes. For purposes of illustration (and perhaps also amusement), he will ask you to perform some cognitive task, then wait patiently while you do it—explain a concept, say, or come up with an argument—before telling you matter-of-factly what your mind just did. When talking with him, you often feel as though your brain is a machine, and his job is to know how it works better than it knows itself.
He spent much of his first year at Minerva surveying the literature on education and the psychology of learning. “We have numerous sound, reproducible experiments that tell us how people learn, and what teachers can do to improve learning." Some of the studies are ancient, by the standards of scientific research—and yet their lessons are almost wholly ignored.
For example, he points to a 1972 study by Fergus I. M. Craik and Robert S. Lockhart in The Journal of Verbal Learning and Verbal Behavior, which shows that memory of material is enhanced by “deep" cognitive tasks. In an educational context, such tasks would include working with material, applying it, arguing about it (rote memorization is insufficient). The finding is hardly revolutionary, but applying it systematically in the classroom is. Similarly, research shows that having a pop quiz at the beginning of a class and (if the students are warned in advance) another one at a random moment later in the class greatly increases the durability of what is learned. Likewise, if you ask a student to explain a concept she has been studying, the very act of articulating it seems to lodge it in her memory. Forcing students to guess the answer to a problem, and to discuss their answers in small groups, seems to make them understand the problem better—even if they guess wrong.
This approach does have its efficiencies. In a normal class, a pop quiz might involve taking out paper and pencils, not to mention eye-rolls from students. On the Minerva platform, quizzes—often a single multiple-choice question—are over and done in a matter of seconds, with students’ answers immediately logged and analyzed. Professors are able to sort students instantly, and by many metrics, for small-group work—perhaps pairing poets with business majors, to expose students who are weak in a particular class to the thought processes of their stronger peers. Some claim that education is an art and a science. Nelson has disputed this: “It’s a science and a science."
Nelson likes to compare this approach to traditional seminars. He says he spoke to a prominent university president—he wouldn’t say which one—early in the planning of Minerva, and he found the man’s view of education, in a word, faith-based. “He said the reason elite university education was so great was because you take an expert in the subject, plus a bunch of smart kids, you put them in a room and apply pressure—and magic happens," Nelson told me, leaning portentously on that word. “That was his analysis. They’re trying to sell magic! Something that happens by accident! It sure didn’t happen when I was an undergrad."
To Kosslyn, building effective teaching techniques directly into the platform gives Minerva a huge advantage. “Typically, the way a professor learns to teach is completely haphazard," he says. “One day the person is a graduate student, and the next day, a professor standing up giving a lecture, with almost no training." Lectures, Kosslyn says, are pedagogically unsound, although for universities looking to trim budgets they are at least cost-effective, with one employee for dozens or hundreds of tuition-paying students. “A great way to teach," Kosslyn says drily, “but a terrible way to learn."
I asked him whether, at Harvard and Stanford, he attempted to apply any of the lessons of psychology in the classroom. He told me he could have alerted colleagues to best practices, but they most likely would have ignored them. “The classroom time is theirs, and it is sacrosanct," he says. The very thought that he might be able to impose his own order on it was laughable. Professors, especially tenured ones at places like Harvard, answer to nobody.
It occurred to me that Kosslyn was living the dream of every university administrator who has watched professors mulishly defy even the most reasonable directives. Kosslyn had powers literally no one at Harvard—even the president—had. He could tell people what to do, and they had to do it.
There were moments, during my various conversations with Kosslyn and Nelson, when I found I couldn’t wait for Minerva’s wrecking ball to demolish the ivory tower. The American college system is a frustrating thing—and I say this as someone who was a satisfied customer of two undergraduate institutions, Deep Springs College (an obscure but selective college in the high desert of California) and Harvard. At Deep Springs, my classes rarely exceeded five students. At Harvard, I went to many excellent lectures and took only one class with fewer than 10 students. I didn’t sleepwalk or drink my way through either school, and the education I received was well worth the $16,000 a year my parents paid, after scholarships.
But the Minerva seminar did bring back memories of many a pointless, formless discussion or lecture, and it began to seem obvious that if Harvard had approached teaching with a little more care, it could have improved the seminars and replaced the worst lectures with something else.
When Eric Bonabeau assigned the reading for his class on induction, he barely bothered to tell us what induction was, or how it related to North Atlantic cod. When I asked him afterward about his decision not to spend a session introducing the concept, he said the Web had plenty of tutorials about induction, and any Minerva student ought to be able to learn the basics on her own time, in her own way. Seminars are for advanced discussion. And, of course, he was right.
Minerva’s model, Nelson says, will flourish in part because it will exploit free online content, rather than trying to compete with it, as traditional universities do. A student who wants an introductory economics course can turn to Coursera or Khan Academy. “We are a university, and a MOOC is a version of publishing," Nelson explains. “The reason we can get away with the pedagogical model we have is because MOOCs exist. The MOOCs will eventually make lectures obsolete."
Indeed, the more I looked into Minerva and its operations, the more I started to think that certain functions of universities have simply become less relevant as information has become more ubiquitous. Just as learning to read in Latin was essential before books became widely available in other languages, gathering students in places where they could attend lectures in person was once a necessary part of higher education. But by now books are abundant, and so are serviceable online lectures by knowledgeable experts.
On the other hand, no one yet knows whether reducing a university to a smooth-running pedagogical machine will continue to allow scholarship to thrive—or whether it will simply put universities out of business, replace scholar-teachers with just teachers, and retard a whole generation of research. At any great university, there are faculty who are terrible at teaching but whose work drives their field forward with greater momentum than the research of their classroom-competent colleagues. Will there be a place for such people at Minerva—or anywhere, if Minerva succeeds?
Last spring, when universities began mailing out acceptance letters and parents all over the country shuddered as the reality of tuition bills became more concrete, Minerva sent 69 offers. Thirty-three students decided to enroll, a typical percentage for a liberal-arts school. Nelson told me Minerva would admit students without regard for diversity or balance of gender.
Applicants to Minerva take a battery of online quizzes, including spatial-reasoning tests of the sort one might find on an IQ test. SATs are not considered, because affluent students can boost their scores by hiring tutors. (“They’re a good way of determining how rich a student is," Nelson says.) If students perform well enough, Minerva interviews them over Skype and makes them write a short essay during the interview, to ensure that they aren’t paying a ghostwriter. “The top 30 applicants get in," he told me back in February, slicing his hand through the air to mark the cutoff point. For more than three years, he had been proselytizing worldwide, speaking to high-school students in California and Qatar and Brazil. In May, he and the Minerva deans made the final chop.
Of the students who enrolled, slightly less than 20 percent are American—a percentage much higher than anticipated. (Nelson ultimately expects as many as 90 percent of the students to come from overseas.) Perhaps not surprisingly, the students come disproportionately from unconventional backgrounds—nearly one-tenth are from United World Colleges, the chain of cosmopolitan hippie high schools that brings together students from around the globe in places like Wales, Singapore, and New Mexico.
In an oddly controlling move for a university, Minerva asked admitted students to run requests for media interviews by its public-relations department. But the university gave me the names of three students willing to speak.
When I got through to Ian Van Buskirk of Marietta, Georgia, he was eager to tell me about a dugout canoe that he had recently carved out of a two-ton oak log, using only an ax, an adze, and a chisel, and that he planned to take on a maiden voyage in the hour after our conversation. He told me he would have attended Duke University if Minerva hadn’t come calling, but he said it wasn’t a particularly difficult decision, even though Minerva lacks the prestige and 176-year history of Duke. “There’s no reputation out there," he told me. “But that means we get to make the reputation ourselves. I’m creating it now, while I’m talking to you."
Minerva had let him try out the same online platform I did, and Van Buskirk singled out the “level of interaction and intensity" as a reason for attending. “It took deep concentration," he said. “It’s not some lecture class where you can just click ‘record’ on your tape." He said the focus required was similar to the mind-set he’d needed when he made his first hacks into his oak log, which could have cracked, rendering it useless.
Another student, Shane Dabor, of the small city of Brantford, Ontario, had planned to attend Canada’s University of Waterloo or the University of Toronto. But his experiences with online learning and a series of internships had led him to conclude that traditional universities were not for him. “I already had lots of friends at university who weren’t learning anything," he says. “Both options seemed like a wager, and I chose this one."
A young Palestinian woman, Rana Abu Diab, of Silwan, in East Jerusalem, described how she had learned English through movies and books (a translation of the Norwegian philosophical novel Sophie’s World was a particular favorite). “If I had relied on my school, I would not be able to have a two-minute conversation," she told me in fluent English. During a year studying media at Birzeit University, in Ramallah, she heard about Minerva and decided to scrap her other academic plans and focus on applying there. For her, the ability to study overseas on multiple continents, and get an American-style liberal-arts education in the process, was irresistible. “I want to explore everything and learn everything," she says. “And that’s what Minerva is offering: an experience that lets you live multiple lives and learn not just your concentration but how to think." Minerva admitted her, and, like a third of her classmates in the founding class, she received a supplemental scholarship, which she could use to pay for her computer and health insurance.
Two students told me that they had felt a little trepidation, and a need to convince themselves or their parents that Minerva wasn’t just a moneymaking scheme. Minerva had an open house weekend for admitted students, and (perhaps ironically) the in-person interactions with Minerva faculty and staff helped assure them that the university was legit. The students all now say they’re confident in Minerva—although of course they can leave whenever they like, with little lost but time.
Many people consider universities sacred places, and they might even see professors’ freedom to be the fallible sovereigns of their own classrooms as a necessary part of what makes a university special. To these romantics, universities are havens from a world dominated by orthodoxy, money, and quotidian concerns. Professors get to think independently, and students come away molded by the total experience—classes, social life, extracurriculars—that the university provides. We spend the rest of our lives chasing mates, money, and jobs, but at university we enjoy the liberty to indulge aimless curiosity in subjects we know nothing about, for purposes unrelated to efficiency or practicality.
Minerva is too young to have attracted zealous naysayers, but it’s safe to assume that the people with this disposition toward the university experience are least likely to be enthusiastic about Minerva and other attempts to revolutionize education through technical innovation. MOOCs are beloved by those too poor for a traditional university, as well as those who like to dabble, and those who like to learn in their pajamas. And MOOCs are not to be knocked: for a precocious Malawian peasant girl who learns math through free lessons from Khan Academy, the new Web resources can change her life. But the dropout rate for online classes is about 95 percent, and they skew strongly toward quantitative disciplines, particularly computer science, and toward privileged male students. As Nelson is fond of pointing out, however, MOOCs will continue to get better, until eventually no one will pay Duke or Johns Hopkins for the possibility of a good lecture, when Coursera offers a reliably great one, with hundreds of thousands of five-star ratings, for free.
The question remains as to whether Minerva can provide what traditional universities offer now. Kosslyn’s project of efficiently cramming learning into students’ brains is preferable to failing to cram in anything at all. And it is designed to convey not just information, as most MOOCs seem to, but whole mental tool kits that help students become more thoughtful citizens. But defenders of the traditional university see efficiency as a false idol.
“Like other things that are going on now in higher ed, Minerva brings us back to first principles," says Harry R. Lewis, a computer-science professor who was the dean of Harvard’s undergraduate college from 1995 to 2003. What, he asks, does it mean to be educated? Perhaps the process of education is a profound one, involving all sorts of leaps in maturity that do not show up on a Kosslyn-style test of pedagogical efficiency. “I’m sure there’s a market for people who want to be more efficiently educated," Lewis says. “But how do you improve the efficiency of growing up?"
He warns that online-education innovations tend to be oversold. “They seem to want to re-create the School of Athens in every little hamlet on the prairie—and maybe they’ll do that," he told me. “But part of the process of education happens not just through good pedagogy but by having students in places where they see the scholars working and plying their trades."
“Content is about to become free and ubiquitous," said Daphne Koller, a co-founder of the online-course provider Coursera. It was an especially worrying comment for deans who still thought the job of their universities was to teach “content." The institutions “that are going to survive are the ones that reimagine themselves in this new world."
Nelson ticked off the advantages he had over legacy institutions: the spryness of a well-funded start-up, a student body from all over the world, and deals for faculty (they get to keep their own intellectual property, rather than having to hand over lucrative patents to, say, Stanford) that are likely to make Minerva attractive.
Yet in some ways, the worst possible outcome would be for U.S. higher education to accept Minerva as its model and dismantle the old universities before anyone can really be sure that it offers a satisfactory replacement. During my conversations with the three Minerva students, I wanted to ask whether they were confident Minerva would give them all the wonderful intangibles and productive diversions that Harry Lewis found so important. But then I remembered what I was like as a teenager headed off to college, so ignorant of what college was and what it could be, and so reliant on the college itself to provide what I’d need in order to get a good education. These three young students were more resourceful than I was, and probably more deliberate in their choice of college. But they were newcomers to higher education, and asking them whether their fledgling alma mater could provide these things seemed akin to asking the passengers on the Mayflower how they liked America as soon as their feet touched Plymouth Rock.
Lewis is certainly right when he says that Minerva challenges the field to return to first principles. But of course the conclusions one reaches might not be flattering to traditional colleges. One possibility is that Minerva will fail because a college degree, for all the high-minded talk of liberal education— of lighting fires and raising thoughtful citizens—is really just a credential, or an entry point to an old-boys network that gets you your first job and your first lunch with the machers at your alumni club. Minerva has no alumni club, and if it fails for this reason, it will look naive and idealistic, a bet on the inherent value of education in a world where cynicism gets better odds.
At the university-administrator conference where Nelson spoke in February, I sat at a table with an affable bunch of deans from Australia and the United States. They listened attentively, first with interest and then with growing alarm. Toward the end of the conversation, the sponsoring organization’s president asked the panelists what they expected to be said at a similar event in 2017, on the same topic of innovative online education. (“Assuming we’re still in business," a dean near me whispered to no one in particular.)
Daphne Koller said she expected Coursera to have grown in offerings into a university the size of a large state school—after having started from scratch in 2012. Even before Nelson gave his answer, I noticed some audience members uncomfortably shifting their weight. The stench of fear made him bold.
The Collegiate Learning Assessment (CLA) is a standardized testing initiative in United States higher-education evaluation and assessment. It uses a "value-added" outcome model to examine a college or university's contribution to student learning, relying on the institution, rather than the individual student, as the primary unit of analysis. The CLA measures are designed to test for critical thinking, analytic reasoning, problem solving, and written communication skills. The assessment consists of open-ended questions, is administered to students online, and controls for incoming academic ability. An institution's average score on the CLA measures correlates highly with the institution's average SAT score (r = 0.90).[1] Institutional results are not published.
The CLA was first launched in 2000 by the Council for Aid to Education (CAE), a national nonprofit organization based in New York City. Rather than testing for specific content knowledge gained in particular courses or majors, the intent was to assess “the collective and cumulative result of what takes place or does not take place over the four to six years of undergraduate education in and out of the classroom."[2] Of the entire test, the most well-developed and sophisticated part is its performance task component,[3] in which students are given ninety minutes to respond to a writing prompt that is associated with a set of background documents. The testing materials, including the documents, are accessed through a computer. The CAE has published several examples of representative performance tasks.
The CAE also publishes its scoring rubric.[5] The design of both the prompts and the criteria for evaluation demonstrates the CLA's focus on complex, holistic, real-world problem-solving as a measurement of high level learning. As a result, the argument goes, institutions that attempt to “teach to the test" will be schools that teach students “to think critically, reason analytically, solve problems, and communicate clearly."[6]
According to Academically Adrift: Limited Learning on College Campuses, there are four primary criticisms of the CLA: two relate to the validity of the instrument, and two relate to its normative implications.
The Voluntary System of Assessment, an initiative developed by the American Association of State Colleges and Universities (AASCU) and the National Association of State Universities and Land-Grant Colleges (NASULGC) for four-year public colleges and universities, endorses the use of the CLA for reporting student learning outcomes through the College Portrait.
In a move that could alter the working dynamic between Harvard and many of its teaching fellows and Ph.D. seekers, a group of graduate students has begun an effort to unionize, according to members of the movement.
Aaron T. Bekemeyer and Elaine F. Stranahan, graduate students involved in the unionization effort, said Friday that the movement is still in its early stages, but added that it counts members from all three divisions of the Graduate School of Arts and Sciences. Bekemeyer and Stranahan did not share the number of participants with The Crimson.
The Harvard graduate students join peers at Yale in organizing to unionize. If the movement is successful, it could change the way graduate students interact with the University, according to members who envisioned a more centralized complaint system and suggested that a union could empower graduate students in negotiations with Harvard.
“We are interested in making the University a better place for grad students to do the best research and teaching that they can do," Bekemeyer said.
Stranahan said she hopes Harvard and its graduate students can see eye-to-eye.
“We are hopeful that the University will be fully cooperative with us, that they share the exact same interests that we are advocating for, that what is their best interest is in our interest, that we are on the same side, really," she said.
Legal precedent, however, has gone against similar efforts at other schools. While last year the National Labor Relations Board ruled that football players at Northwestern held the legal status of employees at the university and could form a union, a 2004 ruling classified graduate students at Brown as non-employees, meaning that federal union protections would not apply to them. This ruling, according to Rutgers professor Paula B. Voos, means that private schools still hold the upper hand with graduate union efforts.
“[Graduate students] cannot make the employers [recognize a union] even if they have an election that wins the majority vote saying that they wish to unionize," Voos said.
Citing that precedent, Columbia has not recognized a union that graduate students voted to form in 2014. Those students have since brought their case to the NLRB, which has not yet announced a decision.
When New York University teaching and research assistants unionized in 2013, they formed the only union of graduate assistants recognized by a private university in the U.S., according to The New York Times. Administrations at many other private universities where unionization movements have arisen have opposed those efforts.
Members of the Harvard movement are watching other unionization movements closely, according to Stranahan.
“We are aware of and following some of the unionization campaigns among graduate students and other schools," she said. “We have yet to see what that larger national scene and the legal circumstances that result and how that will affect what our path will end up being."
Ann Hall, a GSAS spokesperson, declined to comment on the school’s views of the labor movement. Anna Cowenhoven, a spokesperson for the Faculty of Arts and Sciences, similarly declined.
Stranahan said the unionization movement is focusing on a number of issues, including health care and dental coverage, housing and family concerns, and the experience of teaching fellows.
Bekemeyer drew a distinction between the Harvard Teaching Campaign, a movement focused on capping section sizes in undergraduate courses, and the unionization group, but said the entities might work toward complementary goals.
“The Teaching Campaign is about a pedagogical issue about providing a better opportunity for undergraduates in these classes. But I also think it is a platform to raise these other issues about what teaching is like for TFs, and it won’t solve all of them directly, but it will hopefully provide momentum to address these other issues," he said. “These are the exact sort of issues that [a] union would help with."
—Staff writer Jill E. Steinman can be reached at [email protected] Follow her on Twitter @jillsteinman.
Reader comments are listed below.
I work for a large American multinational, where I am expected to work 12-15 hour days, and some weekends. If senior management adopted more humane demands of their employees (9 to 6), they would have to hire more people to get the job done. Thousands more would be employed, and I could have dinner with my family once in a while (which I would happily take a pay cut for).
Technology was supposed to grant us the rest that our grandparents' generation couldn't afford. Instead, we find ourselves staying in the office longer, and still answering emails from home.
The only people actually benefiting from this system are the guys at the top who only pay a single employee's salary to get the work of multiple people done.
The growth in "Services" employment masks the fact that most of that is health care and education, both of which are in a terrible crisis of multiple-of-inflation unaffordable costs. That growth cannot continue indefinitely, and is already showing signs of ending.
What will replace them? Nothing is evident. This year we read of automation replacing humans in controlling anesthesia during surgery and Watson killing it in medical diagnosis.
It's time to start thinking of automation/robots as "workers", as they increasingly compete with wetware. Does anyone think that we'll be able to raise the productivity of the latter as quickly as the former? If we don't, what jobs will be safe from non-striking, never-sick, never-sleeping, no-kids-soccer-match robots?
Stuff like, e.g., raising the minimum wage, in this new era of barista-free automated Starbucks (yes) is truly a last gasp. We are going to have to confront the fact that an ever-higher fraction of workers will be unable to support their families on the amount they can earn, if they can get work at all. While better monetary policy can likely get the developed world back to full employment short term, it won't affect this ultimate issue.
Our parents had careers. We had jobs. Our kids have gigs, if they're lucky. What a world!
At some point in the distant future, robots will do all the work - if we are not robots ourselves by that time.
Then it is only a question of time before a large part of humanity will not have jobs, and we might as well start thinking now about how we will handle that.
To me, the gospel that technology will always magically create about the same number of new jobs as it displaces is just a delusion.
We will have to change the model.
Great point. I was brought up in the former Soviet Union and witnessed its demise. In 1993, I came to the US and lived there for 7 years. In 2000 I moved to Canada.
Three very different societies, and different values.
My personal observation is that Canadians have the best values as people. They are most law abiding, responsible, family oriented, environment protecting, caring and patriotic of all three societies that I've lived in.
Of the three, IQs may be highest in Russia, but without societal values, that only spells trouble. For example, Russia produces the highest percentage of computer hackers - a high-IQ, poor-values group.
I also agree that central control will fail fast - I lived it with the USSR failing, that's exactly what it was.
Values and education must work together. That's the only way.
Thanks for raising this great point.
Can't believe it, Karl Marx was actually right. What we are seeing here is that the means of production are reaching such a high level that they are in conflict with the relations of production. Or, said in other words, our technology is becoming incompatible with capitalism. Like it or not, I don't see any other way out than a revolution.
Gains in productivity, information and mechanization over the past century have gifted the Youth with large empty days liberated from hard work, dangerous occupations, and food insecurity.
However, the time saved is wasted on playing kick-ass video games, shopping mall materialism, social media sharing, and porn.
Video games, computers, tv and smart phones are on all the time.
Productivity gives Time, but Time is not always well spent.
I see lots of good points in the comments. My problem is that this article is far too optimistic in general. Unemployment will continue to rise, and the article is basically telling half or more of the population that they have no place in the 'Brave New World', except perhaps as something like a patient in an asylum, a government ward.
A world in which the only free man is the one in the top 1%, whose capital, or whose ancestors' capital, bought him a large share of this new wave of automation.
Fear of this and other problems has led a lot of folks out there to hope for some even larger kind of machine revolution that they call 'Singularity'. This is clearly a new religion forming before our eyes.
Even if the Luddites were wrong, they turned violent in the past because they saw the rest of the world telling them to 'go die in a ditch, you're no longer needed.'
I think we need to start taxing the wealthy accordingly so that we can hire more people in healthcare and education.
Instead, the US government gives tax breaks to the super rich while cutting spending on education and healthcare.
Average IQ of 80% of Americans is 85-95
Average IQ needed for New Economy Jobs 120
-
No amount of education will turn a truck driver into a Googlesque programmer. Also generational IQs keep decreasing. An 89 IQ might have been adequate for the industrial revolution but isn't for the information revolution.
-
Basically, capitalism has hit a wall with IQ levels; the capitalistic foundations of increasing specialization will no longer work in the new economy, because they are limited by skill and IQ levels.
-
The only real long-term answer is to figure out how we stop falling generational IQ levels and instead increase them (no, education is not the sole answer, as college-educated baristas can attest).
The answer is to solve the tough societal/cultural problems. I can only see two answers, increased centralized control or increased focus on individual values and responsibilities in our social contracts.
_
I do not favor more central control and legalism, as these usually fail quickly (Qin Dynasty/legalism). Rather, as Confucius and other great philosophers realized, a society based on values and individual responsibility, with duty to its social contracts, is more resilient, efficient, long-lasting, and free, and allows the greatest happiness.
Finally, a major publication that is willing to tell it how it is. I can't tell you how many people have been totally lying about this because they fear what telling the truth would result in. Thanks for talking about it so much.
The social movements that seem to strengthen when these periods of labor displacement happen are a completely natural part of capitalism's inevitable transition into socialism. As more and more workers are displaced by automation they must petition government for their grievances out of survival necessity and being unemployed they have plenty of time to do that. Once in control of government they tend to introduce more egalitarian policies, which are basically "socialist" in nature. Over time these changes add up as a growing pool of the unemployed (or poorly paid employed) gradually make the government expand into taking care of their welfare.
So when you see these periods of major technological revolution that upset the labor status quo, they tend to be followed by major social revolutions too which help level the playing field again and result in an expansion of a welfare state.
This article reflects one reason why I no longer read much of your material. While the issue is real, your blithe dismissal of the possibility of addressing the growing wealth disparity isn't even close to convincing and puts you in league with the ideologues of the right. Let's be creative: we can do something about it, and I'm not just talking about coercive taxation.
It's clearly time to start having a mature discussion about a Citizen's Basic Income.
If few can buy, how much needs to be produced? Ford produced a car his workers could buy, and he is reported to have actually given thought to this. More 360-degree thinking is needed.
Imagine that a system replaced the real estate agent (seems to be a high probability according to the table provided). This means that a group of engineers must have sat down and created a piece of software that replaced real estate agent’s job. As a result, these engineers know this system and the business process in their entirety. As real estate market evolves and new business opportunities or models become available (which is even more likely with the flexibility of software), these software engineers will continue to evolve their software. This evolution will never stop – software engineers will continue to evolve the new real estate market.
Looking at it from another perspective, these engineers ARE the new real estate agents. They run the market and control its course. Granted, there are a lot fewer of them, and the scale is very different, but the expertise is not lost; it simply changes hands.
If we remind ourselves that software code is written in a language, and think of software engineers simply as people that speak that language, then we can simply say that people with deep business knowledge in a certain industry that can “code" that business process using software language are the new business people. These guys will continuously be needed in the labour market and that market will grow.
We should start thinking of software coding language as one that has to be added to mandatory school curriculum from grade 1. Countries that see the need and adopt this will win in the long run. I am getting my kids to learn it and think of it as their second language, because I need them to be prepared for the future.
I find this article to be short-sighted. It is basically saying that in the long term the future will be fine because technology will produce more jobs. The only basis of that belief is that it happened this way in the past.
Simply because something happened one way in the past does not mean it will happen that way again. The author forgets that in the past there was still a whole lot of room for improvement, and therefore new industries were constantly being created, creating new jobs.
Nowadays, however, the creation of new, highly labor-intensive industries has pretty much stopped completely. The new industries, like video games or social networking, are NOT labor intensive at all (i.e. they do not employ many people).
If I were the author, I would be more concerned about the present facts, that there are not enough jobs and that, for the ones that exist, competition has driven wages down, instead of feeling secure in an imaginary fairy-tale future.
"It will be shockingly easy to launch a startup, bring a new product to market and sell to billions of global consumers (see article). Those who create or invest in blockbuster ideas may earn unprecedented returns as a result.
". . . a hyper-unequal economic model in which a top 1% of capital-owners and “supermanagers" grab a growing share of national income and accumulate an increasing concentration of national wealth."
In order to make this money, they will need people to sell their products to.
This could happen if more people worked fewer hours and there were some form of income distribution.
Having more people work fewer hours would work more easily if benefits were not tied to employment. The principal benefits in the US coupled to employment are health care and retirement. Already part-time work is popular in the US because it effectively does this decoupling.
Sharing the wealth of the few to subsidize the wages of the many, as well as investing their retirement in new wealth creation, would help to mitigate income inequality, allow the majority to own a piece of future progress, and maintain markets in which to sell future benefits.
Creative genius could still be richly but not obscenely rewarded.
IN 1930, when the world was “suffering…from a bad attack of economic pessimism", John Maynard Keynes wrote a broadly optimistic essay, “Economic Possibilities for our Grandchildren". It imagined a middle way between revolution and stagnation that would leave the said grandchildren a great deal richer than their grandparents. But the path was not without dangers.
One of the worries Keynes admitted was a “new disease": “technological unemployment…due to our discovery of means of economising the use of labour outrunning the pace at which we can find new uses for labour." His readers might not have heard of the problem, he suggested—but they were certain to hear a lot more about it in the years to come.
For the most part, they did not. Nowadays, the majority of economists confidently wave such worries away. By raising productivity, they argue, any automation which economises on the use of labour will increase incomes. That will generate demand for new products and services, which will in turn create new jobs for displaced workers. To think otherwise has meant being tarred a Luddite—the name taken by 19th-century textile workers who smashed the machines taking their jobs.
For much of the 20th century, those arguing that technology brought ever more jobs and prosperity looked to have the better of the debate. Real incomes in Britain scarcely doubled between the beginning of the common era and 1570. They then tripled from 1570 to 1875. And they more than tripled from 1875 to 1975. Industrialisation did not end up eliminating the need for human workers. On the contrary, it created employment opportunities sufficient to soak up the 20th century’s exploding population. Keynes’s vision of everyone in the 2030s being a lot richer is largely achieved. His belief they would work just 15 hours or so a week has not come to pass.
When the sleeper wakes
Yet some now fear that a new era of automation enabled by ever more powerful and capable computers could work out differently. They start from the observation that, across the rich world, all is far from well in the world of work. The essence of what they see as a work crisis is that in rich countries the wages of the typical worker, adjusted for cost of living, are stagnant. In America the real wage has hardly budged over the past four decades. Even in places like Britain and Germany, where employment is touching new highs, wages have been flat for a decade. Recent research suggests that this is because substituting capital for labour through automation is increasingly attractive; as a result owners of capital have captured ever more of the world’s income since the 1980s, while the share going to labour has fallen.
At the same time, even in relatively egalitarian places like Sweden, inequality among the employed has risen sharply, with the share going to the highest earners soaring. For those not in the elite, argues David Graeber, an anthropologist at the London School of Economics, much of modern labour consists of stultifying “bullshit jobs"—low- and mid-level screen-sitting that serves simply to occupy workers for whom the economy no longer has much use. Keeping them employed, Mr Graeber argues, is not an economic choice; it is something the ruling class does to keep control over the lives of others.
Be that as it may, drudgery may soon enough give way to frank unemployment. There is already a long-term trend towards lower levels of employment in some rich countries. The proportion of American adults participating in the labour force recently hit its lowest level since 1978, and although some of that is due to the effects of ageing, some is not. In a recent speech that was modelled in part on Keynes’s “Possibilities", Larry Summers, a former American treasury secretary, looked at employment trends among American men between 25 and 54. In the 1960s only one in 20 of those men was not working. According to Mr Summers’s extrapolations, in ten years the number could be one in seven.
This is one indication, Mr Summers says, that technical change is increasingly taking the form of “capital that effectively substitutes for labour". There may be a lot more for such capital to do in the near future. A 2013 paper by Carl Benedikt Frey and Michael Osborne, of the University of Oxford, argued that jobs are at high risk of being automated in 47% of the occupational categories into which work is customarily sorted. That includes accountancy, legal work, technical writing and a lot of other white-collar occupations.
Answering the question of whether such automation could lead to prolonged pain for workers means taking a close look at past experience, theory and technological trends. The picture suggested by this evidence is a complex one. It is also more worrying than many economists and politicians have been prepared to admit.
The lathe of heaven
Economists take the relationship between innovation and higher living standards for granted in part because they believe history justifies such a view. Industrialisation clearly led to enormous rises in incomes and living standards over the long run. Yet the road to riches was rockier than is often appreciated.
In 1500 an estimated 75% of the British labour force toiled in agriculture. By 1800 that figure had fallen to 35%. When the shift to manufacturing got under way during the 18th century it was overwhelmingly done at small scale, either within the home or in a small workshop; employment in a large factory was a rarity. By the end of the 19th century huge plants in massive industrial cities were the norm. The great shift was made possible by automation and steam engines.
Industrial firms combined human labour with big, expensive capital equipment. To maximise the output of that costly machinery, factory owners reorganised the processes of production. Workers were given one or a few repetitive tasks, often making components of finished products rather than whole pieces. Bosses imposed a tight schedule and strict worker discipline to keep up the productive pace. The Industrial Revolution was not simply a matter of replacing muscle with steam; it was a matter of reshaping jobs themselves into the sort of precisely defined components that steam-driven machinery needed—cogs in a factory system.
The way old jobs were done changed; new jobs were created. Joel Mokyr, an economic historian at Northwestern University in Illinois, argues that the more intricate machines, techniques and supply chains of the period all required careful tending. The workers who provided that care were well rewarded. As research by Lawrence Katz, of Harvard University, and Robert Margo, of Boston University, shows, employment in manufacturing “hollowed out". As employment grew for highly skilled workers and unskilled workers, craft workers lost out. This was the loss to which the Luddites, understandably if not effectively, took exception.
With the low-skilled workers far more numerous, at least to begin with, the lot of the average worker during the early part of this great industrial and social upheaval was not a happy one. As Mr Mokyr notes, “life did not improve all that much between 1750 and 1850." For 60 years, from 1770 to 1830, growth in British wages, adjusted for inflation, was imperceptible because productivity growth was restricted to a few industries. Not until the late 19th century, when the gains had spread across the whole economy, did wages at last perform in line with productivity (see chart 1).
Along with social reforms and new political movements that gave voice to the workers, this faster wage growth helped spread the benefits of industrialisation across wider segments of the population. New investments in education provided a supply of workers for the more skilled jobs that were by then being created in ever greater numbers. This shift continued into the 20th century as post-secondary education became increasingly common.
Claudia Goldin, an economist at Harvard University, and Mr Katz have written that workers were in a “race between education and technology" during this period, and for the most part they won. Even so, it was not until the “golden age" after the second world war that workers in the rich world secured real prosperity, and a large, property-owning middle class came to dominate politics. At the same time communism, a legacy of industrialisation’s harsh early era, kept hundreds of millions of people around the world in poverty, and the effects of the imperialism driven by European industrialisation continued to be felt by billions.
The impacts of technological change take their time appearing. They also vary hugely from industry to industry. Although in many simple economic models technology pairs neatly with capital and labour to produce output, in practice technological changes do not affect all workers the same way. Some find that their skills are complementary to new technologies. Others find themselves out of work.
Take computers. In the early 20th century a “computer" was a worker, or a room of workers, doing mathematical calculations by hand, often with the end point of one person’s work the starting point for the next. The development of mechanical and electronic computing rendered these arrangements obsolete. But in time it greatly increased the productivity of those who used the new computers in their work.
Many other technical innovations had similar effects. New machinery displaced handicraft producers across numerous industries, from textiles to metalworking. At the same time it enabled vastly more output per person than craft producers could ever manage.
Player piano
For a task to be replaced by a machine, it helps a great deal if, like the work of human computers, it is already highly routine. Hence the demise of production-line jobs and some sorts of book-keeping, lost to the robot and the spreadsheet. Meanwhile work less easily broken down into a series of stereotyped tasks—whether rewarding, as the management of other workers and the teaching of toddlers can be, or more of a grind, like tidying and cleaning messy work places—has grown as a share of total employment.
But the “race" aspect of technological change means that such workers cannot rest on their pay packets. Firms are constantly experimenting with new technologies and production processes. Experimentation with different techniques and business models requires flexibility, which is one critical advantage of a human worker. Yet over time, as best practices are worked out and then codified, it becomes easier to break production down into routine components, then automate those components as technology allows.
If, that is, automation makes sense. As David Autor, an economist at the Massachusetts Institute of Technology (MIT), points out in a 2013 paper, the mere fact that a job can be automated does not mean that it will be; relative costs also matter. When Nissan produces cars in Japan, he notes, it relies heavily on robots. At plants in India, by contrast, the firm relies more heavily on cheap local labour.
Even when machine capabilities are rapidly improving, it can make sense instead to seek out ever cheaper supplies of increasingly skilled labour. Thus since the 1980s (a time when, in America, the trend towards post-secondary education levelled off) workers there and elsewhere have found themselves facing increased competition from both machines and cheap emerging-market workers.
Such processes have steadily and relentlessly squeezed labour out of the manufacturing sector in most rich economies. The share of American employment in manufacturing has declined sharply since the 1950s, from almost 30% to less than 10%. At the same time, jobs in services soared, from less than 50% of employment to almost 70% (see chart 2). It was inevitable, therefore, that firms would start to apply the same experimentation and reorganisation to service industries.
A new wave of technological progress may dramatically accelerate this automation of brain-work. Evidence is mounting that rapid technological progress, which accounted for the long era of rapid productivity growth from the 19th century to the 1970s, is back. The sort of advances that allow people to put in their pocket a computer that is not only more powerful than any in the world 20 years ago, but also has far better software and far greater access to useful data, as well as to other people and machines, have implications for all sorts of work.
The case for a highly disruptive period of economic growth is made by Erik Brynjolfsson and Andrew McAfee, professors at MIT, in “The Second Machine Age", a book to be published later this month. Like the first great era of industrialisation, they argue, it should deliver enormous benefits—but not without a period of disorienting and uncomfortable change. Their argument rests on an underappreciated aspect of the exponential growth in chip processing speed, memory capacity and other computer metrics: that the amount of progress computers will make in the next few years is always equal to the progress they have made since the very beginning. Mr Brynjolfsson and Mr McAfee reckon that the main bottleneck on innovation is the time it takes society to sort through the many combinations and permutations of new technologies and business models.
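A rough way to make that point concrete (my gloss on the doubling argument, not a formula from the book): if capability doubles over each fixed interval, so that it stands at \(2^n\) units after \(n\) intervals, then the gain over the next interval is

\[ 2^{n+1} - 2^{n} = 2^{n}, \]

which equals everything accumulated since the very beginning. On this view, each new doubling adds as much fresh capability as the entire history of computing before it.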
A startling progression of inventions seems to bear their thesis out. Ten years ago technologically minded economists pointed to driving cars in traffic as the sort of human accomplishment that computers were highly unlikely to master. Now that Google cars are rolling round California driver-free, no one doubts such mastery is possible, though the speed at which fully self-driving cars will come to market remains hard to guess.
Brave new world
Even after computers beat grandmasters at chess (once thought highly unlikely), nobody thought they could take on people at free-form games played in natural language. Then Watson, a pattern-recognising supercomputer developed by IBM, bested the best human competitors in America’s popular and syntactically tricksy general-knowledge quiz show “Jeopardy!" Versions of Watson are being marketed to firms across a range of industries to help with all sorts of pattern-recognition problems. Its acumen will grow, and its costs fall, as firms learn to harness its abilities.
The machines are not just cleverer, they also have access to far more data. The combination of big data and smart machines will take over some occupations wholesale; in others it will allow firms to do more with fewer workers. Text-mining programs will displace professional jobs in legal services. Biopsies will be analysed more efficiently by image-processing software than by lab technicians. Accountants may follow travel agents and tellers into the unemployment line as tax software improves. Machines are already turning basic sports results and financial data into good-enough news stories.
Jobs that are not easily automated may still be transformed. New data-processing technology could break “cognitive" jobs down into smaller and smaller tasks. As well as opening the way to eventual automation this could reduce the satisfaction from such work, just as the satisfaction of making things was reduced by deskilling and interchangeable parts in the 19th century. If such jobs persist, they may engage Mr Graeber’s “bullshit" detector.
Being newly able to do brain work will not stop computers from doing ever more formerly manual labour; it will make them better at it. The designers of the latest generation of industrial robots talk about their creations as helping workers rather than replacing them; but there is little doubt that the technology will be able to do a bit of both—probably more than a bit. A taxi driver will be a rarity in many places by the 2030s or 2040s. That sounds like bad news for journalists who rely on that most reliable source of local knowledge and prejudice—but will there be many journalists left to care? Will there be airline pilots? Or traffic cops? Or soldiers?
There will still be jobs. Even Mr Frey and Mr Osborne, whose research speaks of 47% of job categories being open to automation within two decades, accept that some jobs—especially those currently associated with high levels of education and high wages—will survive (see table). Tyler Cowen, an economist at George Mason University and a much-read blogger, writes in his most recent book, “Average is Over", that rich economies seem to be bifurcating into a small group of workers with skills highly complementary with machine intelligence, for whom he has high hopes, and the rest, for whom not so much.
And although Mr Brynjolfsson and Mr McAfee rightly point out that developing the business models which make the best use of new technologies will involve trial and error and human flexibility, it is also the case that the second machine age will make such trial and error easier. It will be shockingly easy to launch a startup, bring a new product to market and sell to billions of global consumers (see article). Those who create or invest in blockbuster ideas may earn unprecedented returns as a result.
In a forthcoming book Thomas Piketty, an economist at the Paris School of Economics, argues along similar lines that America may be pioneering a hyper-unequal economic model in which a top 1% of capital-owners and “supermanagers" grab a growing share of national income and accumulate an increasing concentration of national wealth. The rise of the middle-class—a 20th-century innovation—was a hugely important political and social development across the world. The squeezing out of that class could generate a more antagonistic, unstable and potentially dangerous politics.
The potential for dramatic change is clear. A future of widespread technological unemployment is harder for many to accept. Every great period of innovation has produced its share of labour-market doomsayers, but technological progress has never previously failed to generate new employment opportunities.
The productivity gains from future automation will be real, even if they mostly accrue to the owners of the machines. Some will be spent on goods and services—golf instructors, household help and so on—and most of the rest invested in firms that are seeking to expand and presumably hire more labour. Though inequality could soar in such a world, unemployment would not necessarily spike. The current doldrum in wages may, like that of the early industrial era, be a temporary matter, with the good times about to roll (see chart 3).
These jobs may look distinctly different from those they replace. Just as past mechanisation freed, or forced, workers into jobs requiring more cognitive dexterity, leaps in machine intelligence could create space for people to specialise in more emotive occupations, as yet unsuited to machines: a world of artists and therapists, love counsellors and yoga instructors.
Such emotional and relational work could be as critical to the future as metal-bashing was in the past, even if it gets little respect at first. Cultural norms change slowly. Manufacturing jobs are still often treated as “better"—in some vague, non-pecuniary way—than paper-pushing is. To some 18th-century observers, working in the fields was inherently more noble than making gewgaws.
But though growth in areas of the economy that are not easily automated provides jobs, it does not necessarily help real wages. Mr Summers points out that prices of things-made-of-widgets have fallen remarkably in past decades; America’s Bureau of Labour Statistics reckons that today you could get the equivalent of an early 1980s television for a twentieth of its then price, were it not that no televisions that poor are still made. However, prices of things not made of widgets, most notably college education and health care, have shot up. If people lived on widgets alone— goods whose costs have fallen because of both globalisation and technology—there would have been no pause in the increase of real wages. It is the increase in the prices of stuff that isn’t mechanised (whose supply is often under the control of the state and perhaps subject to fundamental scarcity) that means a pay packet goes no further than it used to.
So technological progress squeezes some incomes in the short term before making everyone richer in the long term, and can drive up the costs of some things even more than it eventually increases earnings. As innovation continues, automation may bring down costs in some of those stubborn areas as well, though those dominated by scarcity—such as houses in desirable places—are likely to resist the trend, as may those where the state keeps market forces at bay. But if innovation does make health care or higher education cheaper, it will probably be at the cost of more jobs, and give rise to yet more concentration of income.
The machine stops
Even if the long-term outlook is rosy, with the potential for greater wealth and lots of new jobs, it does not mean that policymakers should simply sit on their hands in the mean time. Adaptation to past waves of progress rested on political and policy responses. The most obvious are the massive improvements in educational attainment brought on first by the institution of universal secondary education and then by the rise of university attendance. Policies aimed at similar gains would now seem to be in order. But as Mr Cowen has pointed out, the gains of the 19th and 20th centuries will be hard to duplicate.
Boosting the skills and earning power of the children of 19th-century farmers and labourers took little more than offering schools where they could learn to read, write and do algebra. Pushing a large proportion of college graduates to complete graduate work successfully will be harder and more expensive. Perhaps cheap and innovative online education will indeed make new attainment possible. But as Mr Cowen notes, such programmes may tend to deliver big gains only for the most conscientious students.
Another way in which previous adaptation is not necessarily a good guide to future employment is the existence of welfare. The alternative to joining the 19th-century industrial proletariat was malnourished deprivation. Today, because of measures introduced in response to, and to some extent on the proceeds of, industrialisation, people in the developed world are provided with unemployment benefits, disability allowances and other forms of welfare. They are also much more likely than a bygone peasant to have savings. This means that the “reservation wage"—the wage below which a worker will not accept a job—is now high in historical terms. If governments refuse to allow jobless workers to fall too far below the average standard of living, then this reservation wage will rise steadily, and ever more workers may find work unattractive. And the higher it rises, the greater the incentive to invest in capital that replaces labour.
Everyone should be able to benefit from productivity gains—in that, Keynes was united with his successors. His worry about technological unemployment was mainly a worry about a “temporary phase of maladjustment" as society and the economy adjusted to ever greater levels of productivity. So it could well prove. However, society may find itself sorely tested if, as seems possible, growth and innovation deliver handsome gains to the skilled, while the rest cling to dwindling employment opportunities at stagnant wages.
The Internet is one of humanity’s greatest technical advances. Yet compared to great technological inventions of the past, it is also a colossal economic disappointment.
I’m talking about jobs.
Yes, young programmers are getting jobs straight out of college at salaries in the six figures. But I’m referring to jobs in a deep and sustaining sense — employment well beyond the “1 percent."
For all its economic virtues, the Internet has been long on job displacement and short on job creation. As a result, it is playing a central role in wage stagnation and the decline of the middle class.
Sure, the Internet has created new applications and great companies — Google, Facebook, Amazon, Twitter, and the all-important cloud. But many of the largest Internet companies have for the most part taken revenue from existing companies without growing the total economy.
The technologies of the past had massive new job creation effects that swamped displacement effects. The Internet on the other hand has massive displacement effects that are overwhelming the job creation effects. In the past, new technological achievements created new industries that not only absorbed the displaced workers but generated opportunities for many more. The result was a vibrant middle class.
Consider the integrated circuit, which first appeared on the market in 1961. At that time, the worldwide electronics market was $29 billion. Today it is on the order of $1.5 trillion. The integrated circuit made existing products better. For example, vacuum tube mainframe computers were replaced by computers based on integrated circuits. The new machines were less expensive, far faster, more reliable, substantially smaller, and much more energy efficient. As a result the mainframe computer business expanded rapidly. IBM’s revenue increased from less than $2 billion in 1960 to over $26 billion in 1980. The integrated circuit also spawned new industries and applications that never existed before — cellular communications, PCs, tablets, and the Internet of Things.
The story of the internal combustion engine is even more dramatic. Not only did it create the automotive industry, but Henry Ford shocked the industrial world when he doubled the pay of assembly line workers to $5 a day. Ford reasoned that a higher paid workforce would be able to buy more cars and thus would grow his business. Others followed suit. Ford’s action helped to create the middle class.
Automotive companies also created a large demand for other products and services that employed millions more — steel, coal to make the steel, glass, machine tools, auto dealers and dealerships, gas stations, oil fields, mechanics, bridges, roads, construction equipment, etc. Automobiles created suburbia and the home construction boom that followed. They made a new form of retail distribution possible—the shopping center. The workers in new jobs purchased homes, appliances, and clothes creating still more jobs. During the 20th Century, the industrialized world enjoyed the fruits of what economists call the virtuous circle.
To date the Internet has been much more effective at eliminating jobs than creating new ones. Exhibit A: Online retailing has directly replaced many jobs and indirectly eliminated many more. Amazon’s extremely efficient distribution system replaces retail stores and their employees. Their warehouses use robots instead of workers.
Those are the direct effects. The indirect effects are the disappearing need for retail space, along with workers who build the stores and maintain them, as well as companies that supply retail establishments with furnishings.
The Internet has made shopping more efficient and created more competition that has driven down consumer prices. But it has had little or no effect on per capita sales. Monthly retail sales adjusted for both inflation and population growth are below where they were prior to the 2008 recession — $165 versus $168 billion — and have increased by less than 10% in the last 15 years or about 0.6% per year. Meanwhile, employment in the retail and wholesale trade has dropped from about 21.2 million in 2000 to 19.9 million in 2010.
Those highly paid young coders are a select few. They are also a symptom of something more insidious: the Internet is so efficient that it can create high-revenue companies with very few employees.
The reason Google, Facebook, and Twitter can pay them such large salaries is that these Internet companies are so efficient that they can generate high revenues with few employees.
In 2013, Google had around 50,000 employees and generated revenues of around $55 billion, or more than $1 million in sales per employee. The numbers are similar for Facebook. Amazon was running at a $74 billion revenue rate and had around 110,000 employees, or a little over $670,000 in sales per employee.
In the United States, each non-farm worker adds a little over $120,000 to the domestic output. That means that highly productive Internet companies must create five to ten times the dollars in sales to justify hiring an employee as the average company of the past did.
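As a quick sanity check of the arithmetic above, here is a short Python sketch that recomputes sales per employee from the rough figures quoted in this piece and compares them with the roughly $120,000 of output the average American non-farm worker adds; the revenue and headcount numbers are the article's approximations, not audited financials.

# Back-of-the-envelope check of the sales-per-employee figures quoted above.
# Revenue and headcount are the article's rough 2013 figures, not audited data.
companies = {
    "Google": (55e9, 50_000),     # revenue in US dollars, number of employees
    "Amazon": (74e9, 110_000),
}
AVG_OUTPUT_PER_WORKER = 120_000   # approximate US non-farm output added per worker

for name, (revenue, employees) in companies.items():
    per_employee = revenue / employees
    multiple = per_employee / AVG_OUTPUT_PER_WORKER
    print(f"{name}: ${per_employee:,.0f} in sales per employee "
          f"(about {multiple:.1f} times the average worker's output)")

Run as written, it reports roughly $1.1 million per employee for Google and about $670,000 for Amazon, or between five and ten times the output of the average worker, which is the multiple the argument above turns on.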
The prevailing economic wisdom is that new technologies will create new opportunities that will offset the effects of displacement. We continually use the experiences of the past to support our hopes about the future. But the experiences of the past took place in the physical world. Our future will be increasingly played out in a virtual one.
Given that the Internet isn’t turning out to be the job creation engine of the future we all hoped for, we had better get to work on searching for and implementing policies that will offset the Internet’s displacement effects.
To start with, those policies must be designed with the Internet's efficiency in mind. Raising the minimum wage, for instance, plays straight into the hands of the Internet efficiency engine: it will just drive employers to use machines to replace people. An earned income tax credit is a better approach. Low-paid workers get the benefit of transfer payments, and employers who will not pay higher wages will feel less pressure to automate.
Investing in infrastructure is an excellent way to create jobs, but such infrastructure should be compatible with an increasingly virtual world. Yes, we should fix the roads. But as more and more people work from home, as more and more of what we purchase gets delivered to our doorstep, as more and more of us go out to the movies in our living rooms, and as highway congestion grows, the chances are that more and more of us will use our cars less.
Millennials are the harbingers of this new trend. The number of cars purchased by people 18 to 34 years old has fallen by almost 30%. Millennials are opting to spend their money on high-tech things like tablets, smartphones, and high-bandwidth access.
For a millennial, the infrastructure of the future will be higher-bandwidth interconnections and public transportation that take the place of a car.
Actions like these will chip away at the problem. The challenge will be to find enough of them to offset the effects of the most powerful efficiency engine the world has ever known.
In America, we take a lot of pride in the idea that every person deserves an equal shot at making the most of their talents. Over the years, I’ve learned that equal opportunity relies on equal access to all kinds of things—including the chance to complete some kind of degree after high school.
The United States is actually pretty good at getting students into higher-ed programs. But our completion rates are shockingly low. The United States has the highest dropout rate of any developed country for kids who start a higher-ed program. And that has a huge impact on their ability to build a career and earn a good living.
Fortunately, there are things we can do to turn the situation around. I recently had a chance to speak about the problem and the opportunities to solve it at a meeting of business officers from colleges and universities around the country. I talked with them about what I hope they will do to ensure that students complete a college degree that they can build a career with. Here is the speech I gave.
Remarks as delivered
National Association of College and University Business Officers Annual Meeting
BILL GATES:
Well, good morning. I’m excited to be here. I appreciate this chance to talk with you about the future of one of America’s greatest assets: our higher education system. A lot of people don’t know that our Foundation invests in higher education. They may know more about our work in global health and development, or maybe even more about our work in the K-12 system in the United States. But, in fact, we have a significant program in this area because of its incredible importance.
And, our motivation for all of our work, to eradicate disease, to alleviate poverty, the health of education here in our country, all stems from a single core principle, which is that all lives have equal value.
The United States really stands for the proposition of equal opportunity. And, we’re striving in our work to have the US maintain and strengthen that, where access to great education is the key element. And, when we ask about the strength of our country in the decades to come, renewing this strength, helping it stay on top, I’d say, is one of the most important things that we need to do.
Looking at the individual level of opportunity, do people have equal opportunity? The data we see shows that, unless you’re given the preparation and access to higher education, and unless you have a successful completion of that higher education, your economic opportunity is greatly, greatly reduced. There’s a lot of data recently talking about the premium in salaries for people with four-year college degrees. In 2013, people with four-year college degrees earned 98 percent more per hour, on average, than people without degrees. That differential has gone up a lot. A generation ago, it was only 64 percent.
If you look at the numbers more closely, you will also see that unemployment and partial employment are found primarily among people without four-year degrees. Our economy already is near full employment for people with four-year degrees. And, so, the uncertainty, the difficulty, the challenges faced if you haven’t been able to get a higher education degree are very difficult already today. And, with changes coming in the economy, with more automation, more globalization, that divide will become even more stark in the years ahead.
So, if we’re really serious about all lives having equal value, we need to make sure that the higher education system, in its access, completion, and excellence, is getting the attention it needs.
It is unfortunate that, although the US does quite well in the percentage of kids going into higher education, we’ve actually dropped, quite dramatically, in the percentage who complete higher education. We have, amongst developed countries, the highest dropout rate of kids who start. And, understanding why that happens is very, very important. For many of those kids, that experience is not only financially debilitating, being left with loans that are hard to pay off, but also psychologically very debilitating: they expected to complete, they tried to complete, and, whether it was math or getting the right courses or the scheduling, somehow they weren’t able to do that, which is a huge setback.
So, we have to deliver value. We’ve got to measure that value. And, really, adjusting the resources so that we’re doing that well is a mission for you, the business officers of the colleges and universities. You’re the ones charged with fiscal management. And, that has a huge impact on every aspect of the student’s experience. The quality of instruction, the availability of financial aid, the physical plant, the support systems, all of those are tradeoffs that the financial model drives.
And, my key message today is that that model will be under challenge. And, so, instead of tuning it to find 3 percent here or 4 percent there, which has been the story in the past, there will be dramatic changes, in terms of deciding at what scale you can operate, what kinds of students you’re going to go after, whether you price differentiate, and whether you change your cost structure. And, so, the role of the business officer won’t be just finding that last little bit of tuning or getting the reports done; it will be to get into the center of the strategy, working with the education leaders and the effectiveness measures, and figuring out how those goals and the financial numbers come together.
Everything that counts requires resources: scholarships for low-income students, student supports (and I’ll talk about how critical that is), libraries, labs, the things that attract good professors. And, balancing those things to deliver value, and measuring that value, will be more challenging.
And, when we think about revenue sources for higher education, we can see that a number of them are challenged. A number of them will not grow as costs go up. At the state level, total state funding is not going up substantially. And, a higher portion of state budgets, over time, is going to health and retiree-type areas. Health costs hit the states in many different ways: current employees, retired employees, their Medicare programs. And, those costs go up substantially faster than the rate of inflation or the rate of state revenues. And, so, the second biggest pot of money, which is the education pot, both K-12 and higher ed, gets raided. And, so, on a per-student basis, that money has gone down, and there’s no likely prospect that it will go back up. Some people have thought of it as cyclical, but, in fact, if you look at the last several cycles, it goes down in the downturn and then, during the good years, it stays at that level, and then, when the next downturn hits, it goes down again.
Federal funding, you know, the Pell Grant program and other federal loan programs, expanded dramatically over the last decade. And, now, there’s not actually enough funding to increase those further. Particularly the Pell Grant, where they’re actually behind and have to make that up just to stay at the current level, even ignoring educational cost inflation.
Tiering of students, where you make sure that the ones who can afford high tuitions are paying those, so that you can subsidize more, that’s been pushed, which is a fantastic thing. But, the amount that can be done there is now probably reaching a limit, where even the middle class and upper middle class have a hard time with that top tuition rate. And, so, revenues are going to be tough. It’s not as though we can raise student loan levels dramatically up from the level they are today.
And, those sources of revenue, as they look at outcomes, are probably going to be more demanding. They will be talking about measurements. And, although, in a certain sense, measurements can be a very, very good thing, this is a challenge that we have to get out in front of, because inappropriate measures can be worse than no measures at all. They can incent exactly the wrong behavior.
Now, in this fiscal environment, we also have innovation. Innovation in course delivery, and innovation in how support systems can work. For course delivery, of course, we talk about massive open online courses, and how we can take the lecture component of an education and deliver that in a more flexible, higher-quality form over the Internet. And, the MOOC work really is just at the beginning. There are thousands of those courses today. Some are pretty good. Most are mediocre. But, the competition for excellence between the MOOCs is heating up very dramatically. So, you will see emerge, over the next five years, some fantastic courses for remedial math, remedial writing, statistics, all the entry-level courses. Slowly but surely, three or four will get more budgeting, more feedback, and more improvement. And, all the elements, the lecture element, the quiz element, the online collaboration elements of those, will improve very substantially. And, the net result of that will be that the lecture piece will no longer be competitive. And, so, the real question won’t be about lectures. It will be about how you take those online content pieces and combine them with study groups, labs, and discussion sessions to provide the kind of motivation and supports that students need.
The MOOC, by itself, doesn’t really change things, except for the very most motivated student. It’s just an element to be mixed in to get through all the steps of an entire degree program. And, so, most of these systems will be hybrid systems. After all, a student who could deal with just the MOOC by itself, without any face-to-face contact and counseling, is the type of student who, when we had textbooks, was also capable of getting by and learning the material. The MOOC is not based on new educational knowledge. It’s simply presented in an easier-to-understand, more interactive way that can be fantastic. So, that’s an opportunity.
Now, by taking that lecture piece and changing it, making it not a professor from the institution itself but, rather, somebody who is more like the textbook, someone who competed in a broad market to be the very best, it’s going to raise questions about what the role of the professor is. How do they fit into this new world?
Many institutions will use this capacity, some moving too fast, certainly. But, they will use it to increase their scale. And, so, at the same time as you all have constrained revenues, you will have people, doing a good job or not doing a good job, who are offering to enroll a larger number of students. And, so, in a sense, there will be more competition. The affordability question, the innovation question, all of those lead to saying that, instead of supply and demand being in balance, there will be an imbalance. And, some institutions will make progress into this new world and some will not.
We have seen some great reforms, some leaders taking best practices. We work with the National Center for Academic Transformation. And, there, course by course, in particular areas like remedial math, the redesigns have been very dramatic in terms of raising completion rates and reducing class time. And, that’s very exciting.
I’d say another piece that gets less attention, but that I’m equally excited about, is student advising. That means creating a digital system that knows everything about a student, including all their discussions with adults about scheduling and financing and goals, and the challenges they face; a system that tracks when they have been online and when they have been attending courses, analyzes when they might need some support intervention, and identifies what kind of intervention is best, whether that’s somebody who can help with their financial package or with their motivation or scheduling; and having support resources that, by going to that digital record, have the entire context and can help them in the most efficient, least costly way.
So, those support systems will have to be very connected to administrative systems. That will require investments, and it will require thinking about privacy. And, again, the job categories for the support people, their specialization, and their ability to use these technical tools will be extremely important.
Ironically, as we raise completion rates and get more students into their third and fourth years, the cost structure of delivering courses in the third or fourth year is much higher. And, so, this idea of really understanding your cost by year and your cost by student type is very important.
It is interesting to look at the for-profit sector. And, one can have a very simplistic view of the for-profit sector. One can say, you know, maybe they’ve over-marketed in some cases. Maybe they’ve over-promised in some cases. You know, people are now looking at Corinthian, which is going out of business, having to sell off its campuses, and saying, okay, that sector had some bad practices, and, so, maybe this outcome is just.
But, in fact, if you look into the particulars of what data they were asked to provide, that’s a standard that they weren’t able to live up to. Their money was cut off. And, in fact, that same type of data, I believe, will be demanded of all institutions. And, the states really need to think about whether that is realistic, and how they contribute. What are these longitudinal databases that look at student success from an economic point of view, or in terms of achieving the goals that students had in mind?
I do think, as we look at the for-profit sector, there are a lot of best practices there. The support systems they’ve had, the student tracking, the way they use capital assets, the way they take a much tougher cohort of students, on average, than most institutions. And, so, while there are certain practices there that are not good to adopt, seeing the challenges they face and the things they have done well, and bringing that into all of education, will be important. I don’t believe that a measurement system should simply apply to them and not to the broader universe. And, getting those things right is very, very difficult.
So, it’s time for the higher ed community to develop and adhere to reporting standards, ones that you shape, that really get at these cost and performance issues. And, the sooner you drive this, the better; that’s better than having it brought down from on high in a way that’s really not appropriate. I see no world in which these metrics are not coming along and are not more significant in shaping all the policies.
It is legitimate for the state funders and the federal funders to try to make sure that best practices are identified. We can look at similar institutions in this space and ask, why are their cost structures different? Where is the money going? Is it that the one that’s spending less is missing a value opportunity, or is it that the one that’s spending more is missing some efficiency opportunity? The time when these questions could be put off has certainly come to an end. And, so, I think that’s an opportunity for all of you to rise to the occasion.
We need to make sure that we’re really talking about the goals and the outcomes. And, yes, it’s oversimplistic to say it’s just getting a job with a certain salary. There are things about citizenship, broad knowledge, and deep understanding of the world that we should have in mind. And, those are very difficult to measure. But, that’s no reason to say we won’t measure the other things: the job attainment, the completion rates, the salaries that people get, and the satisfaction. What were their expectations coming in, what happened to them, and why did they think they fell short or the institution fell short? That should be well understood.
There’s a certain irony that academic institutions are good at studying other parts of society, you know, bringing numbers and understanding to the work that they do, looking at the health care system, looking at for-profit business. That’s a phenomenal role that the world of academics plays. But, I’d say, in terms of turning that lens on yourself and asking, are certain degree programs subsidizing others, and is that appropriate? Are there certain degree areas that some universities should get out of, so they can specialize more and go for scale in things that they’re particularly strong in? Those questions simply haven’t been asked, and this new environment will force them to be asked.
Department by department, year by year, one thing we will have to be careful of in all of this is that we don’t want to get into cherry-picking. That is, there could be a tendency, for example, if we look just at completion, to tell institutions not to take the more difficult students. Whereas, in fact, if we can measure that appropriately, we should reward colleges that take students who are tough and help them out.
Many of the measures today, for example these ranking systems, work the opposite way. The basic idea is that, if you can admit geniuses who are fine the day they get in, that’s okay. Then, high SAT scores, high spending, that counts as really good. So, it’s purely an input measure and not an output measure.
But, I do see this really as an opportunity. We can identify the best ways to deliver a postsecondary education, and we can share that. There are so many different decisions where there must be thousands of best practices that we can learn from. There will be a digital infrastructure, not just for the courses but for the analytics, all the things that are going on. And, so, for you, it’s going to be a challenge and an opportunity.
I know that university presidents and academic deans have often been reluctant to bring CFOs and business officers into the strategic discussion. But, that reluctance reflects the fact that the number of moving pieces, the number of things that had to be reconsidered in the past, was very different from what they’ll face now. And, so, it’s an environment where there will no longer be a perception of unlimited resources. And, your advice to presidents and deans will be critical.
So, I think this is very exciting. The role of education is more critical than ever. The role for equality, the role for our country, the role in leading the way and creating the jobs of the future using technology: it can all be done in an amazing way. It will be a period of turmoil and challenge, and I think you will rise to the occasion. We certainly need you to do so. Thank you.