Good information and bad coverage of college costs

We’ve gotten some rare good news about transparency in college costs: the U.S. Department of Education’s new College Scorecard, though limited in many ways, gives students and their families quick, easy ways to understand some of the realities of college costs normally hidden by simplistic discussions of sticker prices. But we need to understand what the tools do and don’t offer.

Today’s Chronicle of Higher Ed is not helping. Costs are at the center of Beckie Supiano’s “What Actual High Schoolers Think of the New College Scorecard.” The piece notes some of the advantages of the College Scorecard, but its pessimistic ending frets about students having too much information to process, and the final–and memorable–anecdote of a student using the site describes an important moment in learning about college costs:

Jimena [Alvarez, a high school sophomore] searched for the University of Miami, and was immediately presented with its $30,000 average annual cost. Her reaction? “Oh, no, I can’t go there,” she said. “Or maybe I can, but I’ll have to have a lot of student loans.”

The Scorecard provides further detail on what students might pay at each college, including information on typical debt, a breakdown of net price by income band, and a link to the college’s net-price calculator. But Jimena had a strong initial reaction, and it wasn’t clear she ever made it far enough into Miami’s data to realize she could get a more personalized price.

The moral seems to be that poor Jimena Alvarez’s “strong initial reaction” prevented her from reaching the story’s important truth: if only she had gone “far enough into Miami’s data” to find her personalized price, she would have gained a subtler and more valuable understanding. The curious omission of what she would have found leaves the reader to think that more information would have reassured her and perhaps maintained her interest in Miami.

But the condescension is unwarranted. In fact, Alvarez understood exactly what the College Scorecard most valuably conveys: Miami is an extremely expensive university. That average cost of $30,394 is almost double the mean, as Alvarez could see clearly on the chart. If she did dig deeper, she would find even more daunting news: the annual cost for families with incomes of $0-30,000 is a staggering $20,783. Florida State’s cost for such families is $11,542. Harvard’s is $3,897. The differences are just as stark in the other income brackets under $100,000.

As limited as the College Scorecard is in some ways, this anecdote presents one of its strengths: the Scorecard emphasizes costs rather than tuition prices, allowing it to convey a much more accurate sense of relative affordability than most conversations about higher education manage. The victory of the Scorecard, in fact, lies in an absence: Supiano’s article never uses the word “tuition.”

The questions we ask our students (and the ones they answer)

The accreditors are coming around to our campus again soon, so assessment is on the march. We held a two-day writing assessment workshop on campus over the summer, and I participated in scoring essays written by first-year students the previous fall. I came away just as skeptical about the quantitative assessment of college writing as I have always been, but I nonetheless found myself shaken by how much the exercise showed me about the pedagogy of college writing.

Recognizing the limitations of giving everybody the same prompt, detached from any connection to course content, the framers of our assessment project—a group of skilled and thoughtful people—gave the teaching faculty some directions about framing their writing prompts but left room for tailoring them to each class. This approach represented our effort to avoid the Scylla and Charybdis of writing assessment: the distorting artificiality of standard exercises, on the one hand, and, on the other, the inability of standardized questions to capture the kind of context-specific scholarship that we most want our students to practice. I was on my first committee trying to navigate those waters in about 2002; I haven’t yet seen anyone find safe passage.

In this latest assessment exercise, the variation among the faculty-written prompts was dizzying. Some were detailed, to the extent that they sounded like guidance for writing full-length scholarly articles. Some consisted of a single sentence inviting the student to analyze two writers, period. Some asked for summary followed by analysis. Some asked students to respond to passages that we faculty had trouble understanding out of context. My point is not that the prompts were bad but that they were so varied that it would be hard to imagine them producing writing that we could assess with a consistent set of criteria.

The real surprise came from reading the students’ essays. In crucial ways, their writing revealed that the students often had not read the prompts carefully, and they were right not to do so. The prompts asked for different kinds of writing, but the students responded in largely uniform ways. They understood the assessment exercise. Most of them have done similar things throughout their elementary and secondary educations: they knew they were supposed to write a short essay, conventionally structured, with some quoted evidence sprinkled in.

And indeed, that’s exactly what we assessed. With our rubrics and inter-rater reliability training in place, we were almost always able to score the essays in a straightforward way because the students knew to rely on the skills that had been praised and rewarded so often in their educations, no matter what their teachers tried to tell them on a given assignment.

The students’ ability to perform assessment-ready writing humbled me in two ways. First, it reminded me that students have often deduced my expectations when I have not explained everything that they need, even though I tend to explain a lot. The assessment exercise showed me how much we all lean on unstated expectations. Second, I gained a new way of thinking about how difficult I have found it to try new kinds of assignments, even with students who are curious, creative, and ambitious. Now I see such assignments in this light: every time I take a step away from an assignment that boils down to “Write an essay of length X on topic Y,” I remove some of my students’ confidence that they know what implicitly earns rewards in academic writing, even if the explicit requirements are incomplete or difficult to understand.

I still want to push my students and myself to break away from conventional essay assignments. I want them to become capable editors as well as readers, to give presentations that deploy ironic as well as explanatory slides, to work productively as members of creative teams that must evaluate their own work and choose how to share it. As I ask them to learn these skills, however, I will do so with a renewed awareness of how much I am requiring them to leave behind the techniques and assumptions that have gotten them to this college in the first place, and I need a similar sense of humility as I encourage colleagues to try new techniques and assignments. I have been thinking especially about the dynamics of classroom authority, race, gender, sexuality, class, and disability: it is easier for some of us than others to ask students to step away from expectations they know they can meet.

I am just beginning to turn from these thoughts to building a structured sense of how to respond constructively to them. From conversations I have had so far, I suspect that my thinking will draw heavily on the methods of my colleagues in the creative arts, for whom it is nothing new to ask students to express vulnerability, to judge one another’s work constructively, and to work in teams whose members have complementary skills. More to come.

Failing badly, failing well

When I go to conference panels on the digital humanities or public humanities, I find that many presentations begin with a dismissal of the kind of assignment where a student writes a paper merely for the audience of a teacher. In many ways, I share this suspicion of the two-person academic conversation; though it has value as a means of practicing formal writing and receiving a careful response, we can replicate and add to that value in collaborative, public-minded projects.

As a community, however, we may not have fully appreciated another advantage of the traditionally graded paper assignment: it fails well.

Students, of course, encounter all kinds of obstacles, from false starts in their research to personal or medical problems to competing priorities. When I assign a traditional paper, I can respond to these situations with a set of tools that I have learned to handle reasonably well: extensions, incompletes, a B-. Whatever has gone wrong, and however I respond, the problem remains mainly between the student and me.

The more I create assignments based on teamwork, editorial practices, and audiences beyond the classroom, the more I find myself creating models that fail badly. One student depends on another meeting a deadline; mistakes become public; and the boundaries of the semester limit my ability to alter deadlines and other expectations for collaborative groups.

Now, as I encourage colleagues to try new kinds of tools and practices, I feel another layer of responsibility here: I need to be able to help develop pedagogies that both succeed and fail well. How have you worked to make collaborative and digital projects fail better?

Locating faculty offices

Here’s a question I’ve been pondering lately, in the space planning process that commands much of my time and attention these days: should we organize faculty office spaces by department?

In almost every academic building I know of, members of a given department have contiguous offices, or as close to contiguous as possible. I see the benefits of contiguity: a sense of departmental identity and ownership of the space around the offices, easy navigation for students and others looking for a member of a given department, and smoother department-based logistics, such as a student collecting signatures from an advisor and a chair in the same department.

On the other hand, if we want to encourage collaboration across disciplinary boundaries, departmental contiguity seems, on its face, the worst way to represent and encourage such work. Furthermore, the traditional arrangement reinforces the sense of alienation often felt by faculty members who do not have colleagues in their discipline, perhaps especially at small institutions. Even if we assign such people to departments administratively, arranging offices by department can remind such people daily that they do not have a disciplinary fit: I’m in the sociology building, one might have to say, even though I’m not a sociologist. This year, I have heard high-level people at two colleges saying that if they could assign offices from scratch, they would do so by lottery, letting biologists and poets mix in a literally random arrangement.

In my building, we have happened upon a third way that I like a lot. In a fairly small building with twenty-odd faculty offices, we have the faculty serving three majors: English, History, and Gender, Women’s, and Sexuality Studies. Anyone with even a little sense of the campus’s academic geography knows where to find those faculty, but within the building, we are shuffled; any given office can belong to any faculty member, and we even move around once in a while. We thus combine the benefits of geographical identity with those of a mild version of mixing.

In our current space planning process, we are contemplating a new building that will house the faculty of the social studies division and the humanities, except for those in the fine arts. I wonder whether we might attempt office assignments by cluster, capturing some of the fluidity of interdisciplinarity while retaining a general sense of campus locality. I wonder whether any readers have experiences, good or bad, with office arrangements other than departmental blocks.

Awarding credit for online courses is not optional.

A point came up in a recent meeting that I had not yet considered. I was told that the issue of whether to award transfer credits for online courses came before one of our committees. The registrar told the committee that we’ve probably awarded such credit already, because in most cases there’s no way to tell from a transcript whether a course was given online.

Well, then. Ready or not . . . .

A thought about the pedagogy of group work

Every job I’ve had–in education, the corporate world, or bureaucratic administration–has relied heavily on work done by teams of employees. None of them has organized those teams in ways that look much like the “group work” many of us foster in college classes. Is teamwork the opposite of group work?

More on Jane Austen and stylistic signatures

Ted Underwood responded to my post on Jane Austen’s style–pointing out the prevalence of adverbs, “to be” constructions, and terms of certainty–by raising the issue of baseline comparisons: “I’d like to know whether this is something about Austen in particular, or whether it’s a characteristic feature of a period/genre. I don’t intuitively know which is more likely.”

Let’s explore! I’m again using Ted’s corpus and software, comparing a given author’s work to the whole corpus. This file is a transcript of the commands and output I’m interpreting below.
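
For readers curious about the measure itself: Mann-Whitney rho, as I understand it, is the U statistic scaled to U / (n1 × n2), i.e. the probability that a randomly chosen document by the target author uses a word more heavily than a randomly chosen document from the rest of the corpus. This is a minimal sketch of that computation in plain Python, not Ted’s actual code, and the per-document frequencies are invented for illustration:

```python
def average_ranks(values):
    """1-based ranks for values, averaging ranks across ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def mann_whitney_rho(author_freqs, rest_freqs):
    """rho = U1 / (n1 * n2); 1.0 means every author doc outranks every other doc."""
    ranks = average_ranks(list(author_freqs) + list(rest_freqs))
    n1, n2 = len(author_freqs), len(rest_freqs)
    u1 = sum(ranks[:n1]) - n1 * (n1 + 1) / 2
    return u1 / (n1 * n2)

# Relative frequency of "very" (occurrences per 10,000 words) in each
# document -- invented numbers, not drawn from the real corpus.
austen_docs = [28.0, 31.5, 25.2, 29.8]
other_docs = [12.1, 15.4, 9.8, 14.0, 11.2, 13.3]
print(mann_whitney_rho(austen_docs, other_docs))  # 1.0
```

A rho near 1 thus means consistent overrepresentation across an author’s documents, which is why the tables below rank words by it.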

I guessed that the author most likely to produce results similar to Austen’s would be Maria Edgeworth. Here’s the list for her:

 WORDS OVERREPRESENTED BY MANN-WHITNEY RHO 
1	understand     	0.937	271	
2	recollect      	0.923	309	
3	talking        	0.916	127	
4	know           	0.916	523	
5	could          	0.913	754	
6	provoking      	0.912	41.9	
7	nonsense       	0.911	62.3	
8	perfectly      	0.905	119	
9	explain        	0.903	192	
10	continually    	0.889	95.4	
11	tired          	0.888	76	
12	going          	0.888	205	
13	do             	0.884	586	
14	dear           	0.88	792	
15	sorry          	0.879	79.5	
16	satisfied      	0.879	93.8	
17	yesterday      	0.879	48.9	
18	liked          	0.875	48.1	
19	spoiled        	0.874	19.6	
20	directly       	0.869	77.2	
21	quite          	0.869	136	
22	please         	0.868	182	
23	you            	0.868	2467	
24	repeated       	0.868	233	
25	decide         	0.866	101	
26	afraid         	0.864	148	
27	repeating      	0.862	52.7	
28	thank          	0.862	115	
29	manage         	0.86	44	
30	guess          	0.86	97.8	
31	sure           	0.859	290	
32	ashamed        	0.857	35.4	
33	put            	0.856	140	
34	admiration     	0.855	90.5	
35	disappointed   	0.855	44.8	
36	surprised      	0.855	75.6	
37	tiresome       	0.853	37.2	
38	especially     	0.853	76.3	
39	not            	0.853	802	
40	reading        	0.853	80.1	
41	dressing       	0.852	9.04	
42	said           	0.852	2783	
43	formerly       	0.851	50	
44	understanding  	0.851	103	
45	possible       	0.85	157	
46	because        	0.85	261	
47	really         	0.85	125	
48	any            	0.85	632	
49	saw            	0.85	183	
50	think          	0.85	173	

My unsystematic eyeballs see no forms of “to be” and far fewer adverbs than populated Austen’s list. Terms of cognition seem especially prominent:

 WORDS OVERREPRESENTED BY MANN-WHITNEY RHO 
1	understand     	0.937	271	
2	recollect      	0.923	309	
4	know           	0.916	523	
9	explain        	0.903	192	
25	decide         	0.866	101	
30	guess          	0.86	97.8	
44	understanding  	0.851	103	
50	think          	0.85	173	

What about Charlotte Lennox? Her list has “extremely” and “wholly” in the first and sixth places, but only one other “-ly” adverb (“instantly” at #29). Lennox’s vocabulary emphasizes the dynamics of sociability. Highlights:

 WORDS OVERREPRESENTED BY MANN-WHITNEY RHO 
2	civility       	0.97	117	
7	amiable        	0.959	353	
8	accompany      	0.959	55.8	
11	conversation   	0.957	258	
12	behaviour      	0.954	419	
13	mortified      	0.949	34.6	
14	mortification  	0.948	113	
15	received       	0.945	119	
18	amusements     	0.939	32.3	
19	entreaties     	0.937	54.9	
20	apprehensions  	0.937	89.4	
21	attentions     	0.936	70.9	
27	conduct        	0.929	195	
28	insisted       	0.928	80.6	
29	instantly      	0.927	209	
30	countenance    	0.925	123	
31	situation      	0.924	260	
33	visit          	0.923	107	
35	arrival        	0.922	83.5	
36	acknowledged   	0.92	53	
37	reception      	0.92	46.8	
38	circumstance   	0.919	98.7	
41	relations      	0.917	84.3	
42	letter         	0.916	312	
43	politeness     	0.916	110	
44	shocked        	0.914	89.2	
45	accident       	0.913	74.1	
46	inform         	0.913	74.8	
47	acquaintance   	0.912	131	
50	ordered        	0.91	66.6	

Walter Scott’s list of 50 (using only his fiction for the sake of comparison) includes only three adverbs, none in his top 30, and the highest-ranking is an adverb of action: “hastily.” Scott’s list evokes military contexts and especially hierarchies of authority:

1	answered       	0.958	2519	
4	warrant        	0.944	501	
8	risk           	0.93	263	
13	permit         	0.914	247	
14	trusty         	0.913	169	
19	weapon         	0.905	235	
22	boot           	0.902	127	
23	followers      	0.898	505	
27	domestics      	0.897	122	
30	commanded      	0.895	222	
32	courtesy       	0.894	262	
33	quarrel        	0.893	183	
34	kinsman        	0.892	432	
35	assistance     	0.892	248	
37	saddle         	0.891	109	
43	displeasure    	0.89	123	
44	attendance     	0.889	162	
47	willingly      	0.889	170	

Hannah More’s list (again, using only her fiction) is unsurprisingly packed with religious terminology, and I see little overlap between her list and the others.

If you want motion in your novel, open your James Fenimore Cooper:

 WORDS OVERREPRESENTED BY MANN-WHITNEY RHO 
1	movements      	0.979	903	
3	movement       	0.97	576	
4	direction      	0.961	579	
6	commenced      	0.958	374	
8	companion      	0.952	645	
18	distance       	0.915	552	
20	quest          	0.913	190	
21	returned       	0.913	829	
27	companions     	0.902	268	
37	disappeared    	0.894	137	
38	preparations   	0.893	93.3	
39	placing        	0.893	74.7	
40	position       	0.892	168	

At this point, I think we have at least a preliminary answer to our question: the prevalence of adverbs and so forth in Austen’s works is indeed characteristic of Austen herself, rather than her period or genre.

This little exploration was great fun for me, as the results returned a mix of new insights–particularly about Austen and Edgeworth–and reassuring common-sense confirmation that the tool identifies the characteristic thematic emphases of Scott and More. In a follow-up post, I’ll offer some quick thoughts about other uses of this kind of word-frequency analysis, from the perspective of a beginning user with a pedagogical emphasis.

Jane Austen and contemporary prose style

I’m on leave this semester to do work in the Digital Humanities, so I’ll be posting a lot about that. My interest in DH is not–or has not been–quantitative, but I am expanding my range by dabbling in quantitative methods, currently with the help of Ted Underwood’s wonderful introduction to the topic.

At the end of his post, Ted provides a dataset and a program he wrote to find groups of words that form something like stylistic signatures in authors and genres. I’ve been playing with the program, with fascinating results. I’ll share one here. This is the list of overrepresented words in Jane Austen’s works according to one of the measures Ted uses:


WORDS OVERREPRESENTED BY MANN-WHITNEY RHO
1 very 0.985 3283
2 wishing 0.984 154
3 staying 0.982 176
4 satisfied 0.977 188
5 fortnight 0.975 152
6 herself 0.973 1553
7 agreeable 0.973 350
8 be 0.971 2645
9 smallest 0.971 182
10 any 0.971 1112
11 really 0.968 555
12 acquaintance 0.967 462
13 excessively 0.967 91.8
14 nothing 0.967 639
15 assure 0.965 268
16 settled 0.964 261
17 marrying 0.964 196
18 much 0.964 841
19 attentions 0.962 212
20 encouraging 0.961 51
21 directly 0.96 290
22 deal 0.96 329
23 warmly 0.96 96.3
24 must 0.96 1141
25 sorry 0.958 198
26 certainly 0.957 323
27 not 0.957 2023
28 tolerably 0.957 95.9
29 handsome 0.957 136
30 quite 0.956 765
31 been 0.956 899
32 exactly 0.955 248
33 invitation 0.955 194
34 being 0.954 699
35 obliged 0.954 280
36 seeing 0.954 206
37 always 0.953 470
38 pleasantly 0.952 37.8
39 delighted 0.951 107
40 talked 0.95 342
41 perfectly 0.949 283
42 distressing 0.949 61.5
43 solicitude 0.949 89.7
44 comfortable 0.948 167
45 walking 0.948 129
46 continuing 0.947 39.1
47 engaged 0.945 120
48 enjoyment 0.942 122
49 dislike 0.941 86.7
50 talking 0.941 194

The list is interesting in many ways, especially in comparison to the corresponding lists for other authors, but I want to emphasize a side point. “Very” tops the list, and it may also top the list of words I discourage my students from using in their papers. (Mark Twain: “Substitute ‘damn’ every time you’re inclined to write ‘very’; your editor will delete it and the writing will be just as it should be.”) And that’s not all: I push students to minimize adverbs, intensifiers, terms of certainty, and “to be” constructions. Such words infuse Austen’s list:


WORDS OVERREPRESENTED BY MANN-WHITNEY RHO
1 very 0.985 3283
8 be 0.971 2645
11 really 0.968 555
13 excessively 0.967 91.8
21 directly 0.96 290
23 warmly 0.96 96.3
26 certainly 0.957 323
28 tolerably 0.957 95.9
30 quite 0.956 765
31 been 0.956 899
32 exactly 0.955 248
34 being 0.954 699
37 always 0.953 470
38 pleasantly 0.952 37.8
41 perfectly 0.949 283
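
The shortened list above is the kind of thing a simple filter can produce mechanically. Here is a rough sketch of such a filter; the category word-sets are my own crude approximations (not drawn from the original analysis), and the crude “-ly” test will of course misfire on words like “only” or “family”:

```python
# Crude word-class filter for a ranked word list -- a sketch, not the
# method used to build the list above.
TO_BE = {"be", "been", "being", "is", "am", "are", "was", "were"}
INTENSIFIERS = {"very", "quite", "really", "so", "such"}
CERTAINTY = {"certainly", "always", "exactly", "perfectly", "must"}

def discouraged(word):
    """True if the word falls in a category I warn students away from."""
    return (word in TO_BE or word in INTENSIFIERS
            or word in CERTAINTY or word.endswith("ly"))

# The top of Austen's list, from the table above.
austen_top = ["very", "wishing", "staying", "satisfied", "fortnight",
              "herself", "agreeable", "be", "smallest", "any", "really"]
print([w for w in austen_top if discouraged(w)])  # ['very', 'be', 'really']
```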

I’ve thought many times about writing a handout on style that outlines the conventional guidelines of modern, essayistic style with counterexamples from great literature. (What would Hamlet do without “to be”?) But this list encourages me to take such thinking a step further: Austen’s case alone could become the foundation of a unit on voice, style, and convention.

Research after writing

Here’s a question that has bugged me for a long time: how can we teach research skills at the introductory level? Or, even trickier, how can we teach research in a non-disciplinary skills course at the introductory level? This semester, I’m trying out a new answer: teaching research by having students research papers they’ve already written.

Every first-semester Grinnell student takes a class we call the Tutorial: a content-based introduction to college-level skills in writing, reading, discussion, presentation, information literacy, and more. (The course is famously overloaded with priorities.) My versions of the course emphasize writing skills, and in the past, I have chosen not to do much with research beyond quotation and citation skills and an introduction to our library facilities; that is, I have covered information literacy rather than independent research skills, leaving the latter to upper-level courses. In thinking about adding a research component for Tutorial, I have always gotten stuck on the problem of assigning research when students cannot read enough to get a strong sense of a research field. Under such circumstances, how can I avoid turning the “research” into the reading of a few semi-random sources, chosen for their vague relationship to a developing paper topic?

This semester, I will try a new approach: building research into the revision of papers. The students will assemble annotated bibliographies of secondary sources for the course’s final portfolios, and they will choose the readings based on issues that arise in my initial responses to their papers. Because the course is portfolio-based, we can identify areas in which secondary sources would help amplify and refine a given argument. The students’ research will thus have a sense of purpose often lacking in preliminary bibliographies: they will go to secondary sources to solve specific problems. Here is the assignment. Comments are most welcome. If this approach works well, I will work to generalize its application to other introductory courses.
