Saturday, December 2, 2017

Do colleges and universities in America do more harm than good? Of course not!

I was shocked to learn that a substantial portion of American adults believe that colleges and universities do more harm than good. Really? What leads them to this conclusion? The web and talk radio are filled with people making such assertions (but offering no evidence). You will see and hear that: it costs too much to go to college; there’s no guarantee of a good job after graduation; student loans are destroying every student’s financial future; college faculty are brainwashing their students – biasing them against traditional American values, teaching them Marxist ideas and misleading them about what it takes to succeed in life; university administrators are claiming more and more tuition money for themselves, and amassing gigantic endowments; and an increasing number of useless majors and frivolous subjects are being taught. Some of these same observers are convinced that most young people should become mechanics, plumbers, and welders, so they can live a good life without wasting time and money getting a college degree. Finally, according to these critics, colleges and universities are coddling students, encouraging them to cave in to political correctness and banning right-thinking speakers. If you read the Chronicle of Higher Education, a weekly newspaper produced by people who know something about what’s actually happening on campuses in the United States, academics are on the defensive -- obsessed with the most outlandish claims of their online critics. We see story after story about a very small number of high-profile campus confrontations. Very little space, though, is devoted to detailed analyses of what is really being taught, the dramatic changes that have taken place in instructional methods (in most fields), the ways that universities are reconfiguring themselves to ensure that their graduates can meet the demands of a changing (global) job market, and the actual impact that college and university study has had.

What we rarely see in the Chronicle, hear on the news or read on the web, are accounts of the vast majority of students and faculty in 90% of the colleges and universities in the country, going about the business of teaching, learning, pursuing basic and applied research and providing service (often as part of applied learning programs) to local and distant communities, agencies, and companies. Unless you spend time in a legitimate sample of colleges or universities on a regular basis, sit in on classes, read the materials students are assigned, read the theses and project reports students produce, analyze the research findings of the faculty and talk with their community and industry partners, you would have no way of knowing the startling success that two-year colleges, four-year colleges, public and private colleges and research universities are having – often in the face of substantial under-funding. They continue to prepare the next generation of workers, citizens, managers and leaders while amassing new knowledge and innovative technologies that make it possible to improve the quality of our lives, use our resources more wisely, organize ourselves productively and govern ourselves effectively. It’s a good thing that our higher education system is working as well as it is, and not the way the critics claim. If they were right, America would have long since lost its competitive edge. New jobs wouldn’t be created at unprecedented rates. Investment capital would have migrated to friendlier locations with better prepared workers, more effective managers and more stable and accountable regulatory systems. But that’s not the case. More of the brightest people from all over the world are still trying to make their way into our colleges and universities.

Unfounded claims about the diminishing value of higher education in America have nothing to do with what really happens in 90% or more of the classrooms, laboratories and field-based learning settings around the country. On most campuses, students and faculty are too busy to worry about what the latest self-aggrandizing guest speaker has to say. The amount of class time spent debating the latest front in the culture wars is trivial. The vast majority of media-based critics don’t spend nearly enough time inside colleges and universities to understand how students, teachers and administrators go about their day-to-day tasks. One reason for this is that many of the people voicing unfounded criticisms have neither the knowledge nor the skill to understand the substance of what’s happening. It takes no knowledge or skill to repeat unsubstantiated claims aimed at attracting attention on the web.

If everyone teaching and every student studying at a college or university in America were to tweet two lines about the most important thing they are learning or doing research about (under the banner #I’m learning what I need to learn or #I’m teaching what I need to teach), we could quickly rectify the built-up misimpressions. My tweet would say: #Teaching urban and environmental planners how to lead and support public and private agencies and organizations in the US and around the world.

There wouldn’t be space in our tweets, but maybe we could also convince the media (of all kinds) to include stories about the new inventions emerging from university laboratories, the start-ups being created in dorm rooms, and the assistance students are providing to a wide range of communities. Most people would be surprised to learn about the new interdisciplinary majors and concentrations that have been created in data science, biotech, applied social science, design science, conflict resolution, user experience design, and a host of other fields at a wide range of colleges and universities. It would be great to see independent documentation of how the requirements in all kinds of degree programs have changed over the past ten years, and how opportunities for hands-on learning and internships have increased in pre-professional studies programs all over the country.


It shouldn’t be hard to create an overwhelming counter-argument showing that all citizens need constant access (throughout their lives) to the learning opportunities that colleges and universities provide, across many fields, for continued skill development and personal fulfillment.  And, our society depends on the constant flow of scholarly insights and research breakthroughs crucial to our continued well-being.   

Tuesday, November 14, 2017

Universities are underinvesting in efforts to improve the quality of teaching

My friend and colleague, Michael O’Hare (a Professor at UC-Berkeley), points out in a recent paper entitled The 1.5% Solution: Quality Assurance for Teaching and Research that major research universities underinvest in continuous improvement of their teaching efforts. Given that universities have only two primary tasks -- teaching and research -- they ought to be willing to invest as much in improving the quality of their teaching as they do in providing an elaborate infrastructure to support basic and applied research. But, that doesn’t seem to be the case.

O’Hare calculates that major universities devote something like $300,000 to present a semester-long course (i.e. student time, rooms, professor’s salary, web, teaching assistants, etc.). This is what it takes to ensure that faculty and students are present in the right place, at the right time, with the resources they need. He assumes, for planning purposes, that a course is taught to 50 students; faculty at research universities carry a three-course-per-year teaching load; teaching is half a professor’s academic year time; and fringe benefits are included. To increase student learning by 5%, therefore, O’Hare estimates that it ought to be worth spending $45,000 per year, per professor (5% of the $900,000 annual cost of a professor’s three courses), to improve the quality of teaching and student learning. Unfortunately, nothing close to that is currently being spent.
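O’Hare’s back-of-the-envelope figure can be checked in a few lines. The inputs ($300,000 per course, a three-course annual load, a 5% learning gain) come from his paper as summarized above; the chained calculation is my reconstruction of how the $45,000 figure follows from them:

```python
# Reconstruction of O'Hare's estimate. The input figures come from the
# text above; the arithmetic linking them is my interpretation.

cost_per_course = 300_000        # cost to present one semester-long course
courses_per_year = 3             # teaching load at a research university

# Annual cost of one professor's teaching effort.
annual_teaching_cost = cost_per_course * courses_per_year   # $900,000

target_learning_gain = 0.05      # desired 5% increase in student learning

# What it ought to be worth spending, per professor per year,
# to achieve that gain.
worth_spending = annual_teaching_cost * target_learning_gain

print(f"${worth_spending:,.0f} per professor per year")  # $45,000
```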

O’Hare suggests that universities ought to invest 1.5% of their faculty payroll in quality assurance to improve teaching performance – in much the same way that almost every industry invests in quality assurance as it seeks to improve its efficiency and effectiveness.

O’Hare points out three ways that any and every university department could try (at very modest cost) to improve the quality of its teaching. These follow closely what other segments of the economy have learned about quality improvement. While teaching is not the same as producing most other products or services, I’m convinced (after almost 50 years as a teacher at MIT) that the most basic quality assurance strategies do apply equally well to the university.

Instructors should talk more with each other. You might not believe it, but it is very rare for MIT faculty to sit in on each other’s courses to observe and offer advice on possible ways of improving teaching. Similarly, faculty members almost never compare notes before classes begin on what they are proposing to cover in their classes and how they intend to go about teaching the material. Everyone is presumed to be a subject-matter expert, though why that is presumed to carry over into teaching expertise is beyond me. If a Department made it a policy that every faculty member should expect one of their colleagues to sit in on at least one of their class sessions each semester, then no one would feel singled out. While such assignments could be made entirely at random, I don’t see a problem with asking faculty to choose the colleague they want to have sit in. In an after-class discussion, I would hope the observer would offer two kinds of comments: (1) “things I saw you do that I’m going to try myself (and why),” and (2) “things you might find helpful.” I don’t think such reports need to be submitted in writing to the relevant Department, but it might be valuable if the person being observed wrote a short summary of what they heard and how they intended to take the feedback on board.

Instructors should make a greater effort to help students learn to teach each other more effectively. Faculty are used to giving students formal feedback (i.e. graded tests and quizzes) on how well they have mastered the material presented in class. It seems to me that faculty could also observe each student giving feedback to a fellow student, and turn that into an occasion to help every student get better at giving constructive feedback and advice to their peers. We need to make it easier for our students to learn from each other. In one of my classes, I ask a few students to make six-minute oral presentations -- set in a hypothetical work situation -- drawing on what they have learned that week in class. As soon as they are done, every student in the class uses a one-page printed template highlighting five or six aspects of the presentation to provide the presenter with immediate feedback. In addition to noting what was done well and what could be improved, each student provides several sentences of commentary. This is all done in five to seven minutes. Each presenter thus gets 25 separate sources of feedback on their presentation. This has nothing to do with their grade. Everyone in the class makes at least three oral presentations over the course of the term. None of the feedback is anonymous. We always say students learn as much from each other as from their professors, but what do we do to make sure that happens? Nothing. I think faculty should commit to making sure that students learn (as part of every course!) how to help their fellow students learn as much as they can from the class. It should be the faculty member’s responsibility to instruct and support students as they help each other learn. I think that academic departments should insist that faculty make an effort to get better at doing this.

Academic departments should measure everything they do on a continuous basis. There’s nothing new about this idea. W. Edwards Deming pointed out many years ago, in the context of industrial activities, that anything not measured is not likely to be improved. What to measure, though, in the context of university teaching, is not clear. Most universities currently measure student satisfaction immediately at the end of a semester-long class. More than anything else, this tends to gauge the popularity of the professor. I’ve rarely seen student course evaluations lead to improvements in teaching strategy or performance. What else might be measured? It seems obvious that it would be a good idea to measure student knowledge about course material before and after each segment of a class, as well as before and after the entire course. This works if a class is mostly aimed at helping students master substantive knowledge. But, if a class is supposed to teach students how to do something, it makes more sense to give students simulated opportunities to see whether they have mastered the relevant skills. Digital simulations are expensive to build, but they work. Face-to-face role-play simulations are not expensive to create, and they work as well. When groups of students in a class play the same game separately, comparisons of the results and student reflections on the experience can give faculty a clear idea of what they are conveying effectively and what needs improvement. I’ve found that saving the last three minutes of a class to ask students what they took away from the session often generates surprising responses. It certainly helps me recalibrate when what they report doesn’t correspond with what I thought I was teaching! I’m in favor of asking each faculty member what they intend to measure so that they can improve their teaching performance. A university department should provide technical support to make this happen.
Then, with the relevant data in hand, each faculty member should commit in writing to experiments or reforms in their next round of teaching, along with a clear indication of what they will measure next time.

I know that there will be substantial resistance to these three simple ideas. Non-tenured faculty will be worried that admitting there is room for improvement in their teaching may somehow jeopardize their reappointment. Tenured faculty have little or no incentive to invest in getting better at teaching. To date, most faculty members at most research universities have not been asked to focus on teaching their students to teach their classmates. This will be seen as an (uncompensated) expansion of the faculty’s role and responsibility. Most faculty won’t know how to do this. Departments will complain that arranging a system of faculty visits to each classroom is a new administrative task for which they are unprepared. Systematically measuring teaching performance (and improvements in teaching performance) is not something that academic administrators know how to do. Nevertheless, I would argue that University leaders should pursue Professor O’Hare’s 1.5% solution to the problem of improving teaching effectiveness. There’s really no good excuse for not getting better at what we do.

Monday, September 18, 2017

Consensus building in the Age of Trump: Strategies for the ADR Field

What’s it like in The Age of Trump?

What’s special about the Age of Trump? I would point to two things. First, our political leaders (not just the President) no longer feel an obligation to represent all the people in the district or state that elected them. Now, they feel accountable only to their “base.” This is a relatively new development (not just in the United States, but in other democracies as well). It used to be that after politicians were elected they felt some obligation to represent the interests of all the people in their district or state. As a result, we now have districts or states (or countries!) where 49.9% of the electorate has no representation. This makes them feel angry, anxious and defensive. It also makes them feel combative.

The second thing that has changed, and it is related to the first, is that many elected and appointed officials don’t care what evidence or arguments anyone on “the other side” presents. They won’t allow themselves to be convinced by what anyone outside their base has to say. This means that those in control of the levers of power can pursue whatever agenda they choose, without having to explain or justify their actions in a manner that “an independent observer” would agree is reasonable. This adds to the outrage, and even desperation, of those who feel shut out and unrepresented. They are especially angry that scientific evidence can be ignored entirely.

So, in the Age of Trump, many people who have not felt powerless before feel powerless now.  They are befuddled by the changes that have occurred in the rules of the game. In the past, they assumed (maybe somewhat naively) that their elected leaders would choose the common good over narrow partisan interests; and, they counted on being able to advocate for what they believe by presenting credible evidence. Now they assume these things won’t happen.

Special challenges for Consensus Builders and other ADR professionals

ADR professionals operate in ways that are intended to ensure fairness – to ensure that all voices are heard and all interests are taken into account when disagreements arise. In a decision-making or governance system that rejects the idea that the interests of all groups matter, ADR professionals are not quite sure what part they are supposed to play. The reason those of us in the ADR field have worked hard to add facilitation, mediation and arbitration to public and private efforts to deal with differences is to enhance the fairness, efficiency, stability and wisdom of the decisions that must be made. In the judicial, executive and legislative branches, at every level of government, we have spent decades demonstrating that adding a professional neutral can, in fact, save time, save money and produce better outcomes (and give stakeholders greater control over what happens to them). In the Age of Trump, ADR professionals wonder how they can do their job if some of the parties don’t care what the interests of the other parties are, or feel no obligation to listen to or present credible evidence to support their claims. Many ADR professionals are extremely upset about these changes. Some are so upset they feel compelled to invest their personal time in political efforts to put things back the way they were. When this involves advocacy, though – even when the professionals involved are operating as private citizens – it threatens our most important professional asset: our neutrality.

Neutrality is central to the value we add as ADR professionals. Our neutrality allows us to earn the trust of all sides in any dispute.  It also means we can operate in the interstices between the parties and, in so doing, carry messages and provide cover for parties to come together without appearing to be weak. My contention is that many ADR professionals are so upset by what is happening in the Age of Trump that they are ready to risk their perceived neutrality.  While I understand their motives, I am convinced this would be a disaster for the profession.

Increasing demand for ADR assistance in periods of heightened conflict    
The Age of Trump has certainly generated new conflicts of various kinds. When everyone is escalating their efforts on behalf of their own point of view, and more people feel entitled to act in their own interests regardless of the interests of “the other side,” there ought to be increasing demand for our services. So, in these times, we ought to be able to make a greater contribution (in part because no one else is offering to reconcile those in conflict or pursue problem-solving strategies in spite of the conflicts that exist). To succeed in the current context, however, will require several things:

1.   First, we have to remind our potential clients that our goal is not to stamp out conflict.  Rather, if they find themselves stalemated and unable to take unilateral action, we can help them find agreeable ways forward in which no one has to give in.  

2.   Second, if well managed, conflict can produce change. Conflict is not a bad thing. As others have noted, it is the engine of change. We can help manage conflict in a constructive way.

3.   Third, the fact that parties are inclined to express their interests and concerns with more passion in the Age of Trump is not a problem for us. In some ways, it should make our work easier. We need to know what the interests and priorities of each party are so we can help them formulate mutually beneficial agreements. We do this by supporting the parties in their search for trades (across issues they value differently) that produce outcomes better for all sides than their BATNAs (best alternatives to a negotiated agreement).

4.   Finally, we need to be sure that our clients understand that our job is not to get anyone to change their beliefs or change their mind. We try to help parties reach mutually advantageous agreements in spite of their differences. We do not allow our own point of view or our own preferences to intrude.


Harmonizing Interests through dialogue vs. assisted problem-solving

A segment of the ADR profession has been moving in the direction of facilitating dialogue. Indeed, there are many who think we should devote a substantial portion of our time to helping Red and Blue (and others who have conflicting values) learn to talk with and understand each other more effectively.  I’m personally not convinced that dialogue for its own sake should be a high priority for the ADR profession.  I don’t think greater understanding is going to lead to harmonization of conflicting values and interests.  Perhaps we can help people with diametrically opposed views hear each other, but I’m not sure that’s as important as working out agreements in specific contexts.  I think we should emphasize problem-solving -- generating “a workable peace” when some action needs to be taken -- rather than devoting time to generating a deeper understanding of the sources of disagreement.  I don’t think Red and Blue need to believe the same things to find ways of taking action.

The key is to convince as many stakeholders as possible that there is a way to meet their interests in a manner that will get them more than what no agreement (stalemate) guarantees, and more than they are likely to get if they continue to battle.

Coming back to neutrality

As I have already said, we must be absolutely diligent about maintaining our neutrality – no matter how strongly we feel personally – if we want to make a case for the value we add. I’m convinced that the way we act in our personal lives may shape how we are perceived in our professional roles. While each of us has opportunities to take direct political action as a private citizen, remember that if you sign a petition, march peacefully, write op-eds, or lobby for your point of view, there is no way anyone on the other side will accept you as a dispute resolution professional they can trust. We need to think very carefully about how we carry ourselves in public. I promise you that whatever actions we take in our personal lives will be noted. Being perceived as neutrals in the Age of Trump is, in my view, the key to contributing to conflict resolution in these difficult times.


[Based on the keynote presentation I made to the Biennial Conference of the New England Association for Conflict Resolution (NEACR) in Waltham, Massachusetts on 9/8/17.]