Tuesday, July 31, 2018

How Important is it to Present at Conferences Early in One’s Career? (Part 2)

Way back when, Julie Gold asked: “How important is it, really, to present papers early in one’s career?” (Research Whisperer’s Facebook page, 3 Feb 2018).
This post is part 2 of the answers received for Julie Gold’s question. If you missed it, here’s part 1!
I must admit my initial response was based around a preference for breaking down the dependence on conferences as THE place to share findings or research ideas. This was, in part, because of the assumptions about researcher mobility and material support that this entails.
However, on reading my trusted colleagues’ views and reflecting on the dynamics of academia more generally, I’ve shifted my opinions.
This post features responses from Kylie ‘Happy Academic’ Ball, Kerstin ‘Postdoc Training’ Fritsches, and urban archaeologist Sarah Hayes.

Kylie Ball (Psychology / School of Exercise and Nutritional Science, Deakin Uni / Happy Academic / @kylieball3)

Early in my career, I regularly attended conferences in my public health / psychology field, and got great value out of them in these ways:
  • The chance to share and profile my work with interested others.
  • Benchmarking my and my team’s research.
  • Making connections that led over time to some great collaborations (and also great friendships).
  • Learning new things and coming away inspired and energised by new ideas.
  • And – OK, I won’t deny it – the groupie’s thrill of meeting in the flesh those academic heroes whose work had long inspired me. [There’s nothing wrong with Academic Fandom! – Ed.]
However, recently, I and others I know have eased off the conference circuit somewhat (family, financial or other reasons) and I don’t think it’s a career deal-breaker. These days, many of these benefits (except maybe the last one) can now be achieved virtually, so if circumstances preclude regular conference attendance I’d just encourage people to be diligent and creative about seeking these outcomes through other avenues.

Kerstin Fritsches (Neuroscience / Director, Postdoc Training / @postdoctraining)

I think everyone has to work out for themselves how useful conferences are and the only way to find out is by presenting at a few, if you have the opportunity.
I don’t think my actual presentations as an Honours and then PhD student were particularly important for my career. However, those events taught me a lot about communicating my research, gave me the chance to meet and have interesting conversations with junior and senior people in my field (neuroethology), got me excited about new ideas, and kept me up to date with progress in my broader field in a way I wouldn’t have managed otherwise. As students, we were only funded to go to a conference if we presented, which is a good policy – you get to experience it as a full participant, nerves and all!
The best conferences for me were the smaller ones (up to 300 participants) and I found poster presentations (in my field often the only presentation option for pre-PhD researchers) a great way to meet people and get the chance to discuss my work. I was pleasantly surprised by how interested and approachable researchers were (both junior and senior), and that alone gave me a lot of confidence to contact people later on.
The other thing to consider is that you may be more likely to find the funding to go to conferences early in your career (supervisor’s funding or student travel scholarships). Once you rely more on your own funding, post-PhD, finding money and the time to go can be a lot more difficult. By then, though, you are much more embedded in your field and can build on the relationships you created earlier.

Sarah Hayes (Archaeology / Alfred Deakin Centre, Deakin Uni / @SarahHResearch)

An underdeveloped conference paper (or worse, year after year of poor papers) isn’t going to do you any favours in terms of developing esteem within your discipline. This is tricky when you are starting out, so I’d say get some traction on your ideas by presenting your paper to your department or at a local symposium before going for the big, national conference. Better to give conference papers less often and do them really well. Consider finding a mentor who you think has great skills in presenting, and who can help you prepare.
When you do go for the national conference, be strategic about it, especially if you have limited conference travel funding from your institution. Definitely attend when your conference is in your home town, and keep an eye out for grant or funding opportunities for travel (some of which now have provisions to help you travel with kids).
If you only attend two conferences while you are an emerging researcher, do the first in the final year of your PhD (you’ll be confident talking about your PhD research and you’ll gain contacts for job opportunities), and do the second the year before you go for a big grant (e.g. an ARC DECRA) so your potential assessors know who you are.
If it’s really difficult for you to attend, make the most of all the digital networking opportunities available to you in order to build your reputation. Consider other ways of making yourself known to your colleagues and especially the ‘old guard’ within your discipline. Maybe shoot them an email about how great you think their recently published paper is! I’ve had three or four such emails and the positive impression this created of the senders has stayed with me.
Finally, when you do attend a conference, I’d suggest not focusing too much on what you get out of it. See anything you gain as a bonus – think about what you can contribute to other conference attendees and the discipline. In the end, this is far more likely to make you a valued member of your discipline.

So, there you have it! Two posts full of advice and shared experiences from members of the Research Whisperer brains trust!
While I’m still highly critical – and jaundiced, it must be said – about aspects of conference-going (see Staying still), I know that I speak from a particular position of academic sociocultural privilege (continuing role, relatively established research cred, strong community of advocates). For many emerging scholars, the stakes are higher when it comes to building profile and establishing their professional networks.
If our sector considers conferences as absolutely essential parts of researcher development and academic career-building, and emerging researchers are heavily judged on this, we must work to ensure that these scholars are supported to participate and benefit from them as equitably and consistently as possible.
That said, let’s not just do things ‘the conference way’ just because it’s what we’re used to. As some of the respondents pointed out, there are many ways to connect with your discipline and peers in the area, network broadly and internationally, build relationships to collaborate on research, etc.
Our colleagues who may not make it to conferences for any number of reasons should not be judged as lesser scholars. As we’re working to improve the equity of our gatherings, let’s also raise awareness and consideration of who may not be there and why.

How Important is it to Present at Conferences Early in One’s Career? (Part 1)

Way back when, Julie Gold asked: “How important is it, really, to present papers early in one’s career?” (Research Whisperer’s Facebook page, 3 Feb 2018).
I took Julie’s question to be about presenting at conferences and my short, immediate answer (in my head) after I saw it was this:
“Even though many things have changed in academia, and I’d argue that most people could do with less conference-ing (rather than more), getting the word out about your work early in your career is very important, and sustained networking even more so.
There are many ways to do this, though, that don’t HAVE to be conferences – it’s just that conferences still retain a standard allure for academia that will take a longer time to shift…”
Then I stopped and thought a bit more about what I was saying. I realised how narrow my own experiences were (humanities, based in Australia, relatively recent social media zealot) in the broader pool of academic conference lore.
In addition, I’m speaking from a ‘mid-career’ position in the system, with established networks and an established track-record of conference presentation and attendance.
So, I approached a wider circle of Research Whisperer colleagues from various disciplines, perspectives and career stages. They were brilliant! They responded with thoughtful, useful advice and fascinating sharing of their experiences.
In fact, their responses were too good (and, therefore, hard to slice down) so this planned single post has become a 2-parter!
Here’s part one, featuring Inger ‘Thesis Whisperer’ Mewburn, Dani Barrington, Euan Ritchie, and Eva Alisic.

Inger Mewburn (Director – Research Training, ANU / Thesis Whisperer / @thesiswhisperer)

It’s very discipline specific, even in the sciences. For example, computer science tends to publish mostly through conference proceedings these days, and they like the thesis by compilation/publication format, so they tend to encourage candidates to start early. Other science disciplines, like astrophysics, are big on posters as a conference entry point, with papers being something you get to in the middle years.
Humanities varies, but in my experience, it’s usually later on in candidature that supervisors start to encourage conference presentations. In the humanities, you need to shape the topic much more than many science candidates, who might be handed some work started by a previous student.
TSEEN gatecrashes here: I wanted to re-post an excellent comment by Owen S. from my post ‘Staying still’ (which interrogated the idea of always having to attend conferences) – it’s particularly relevant after Inger’s comments and a fascinating insight into one discipline’s protocols.
Owen S.: Computer Science conferences are an interesting case study. You may be aware that rankings have been attached to international Computer Science conferences via the Computing Research and Education (CORE) portal. In this way, a paper accepted into a Computer Science conference becomes a research output with a quality rank, analogous to journal articles with SCImago rankings. The rationale is that the field moves so quickly that journal articles are out of date by the time they’re published, so conferences are the only way to ensure current relevance. CORE-ranked conference papers are counted in the ERA exercise; Computer Science is the only discipline area I’m aware of whose conference papers are allowed into that assessment. As a result, Computer Science schools adopt approaches that encourage conference attendance via workload model allowances and financial incentive schemes.
The CORE system is only recognised in Australia, and there’s a lack of agreement among Computer Scientists about its validity. Several Computer Scientists I’ve spoken to think CORE is irrelevant and that publishing in high quality journals should still be priority #1.
Nevertheless, there is significant pressure for Computer Scientists (including research higher degree candidates) to attend conferences and present papers. When someone can’t afford to travel and present their paper, they sometimes purchase the conference registration and have a proxy present on their behalf. This is heavily frowned upon within the field due to the prevailing travel addiction.
So, all in all, Computer Science academia in Australia has set up a system which makes it almost impossible to break out of the conference cycle.
And now back to the fabulous contributions from RW’s brains trust…

Dani Barrington (Water, Sanitation and Health Engineering / School of Civil Engineering, Leeds Uni / @dani_barrington)

I had a fantastic PhD supervisor, Professor Anas Ghadouani, who pushed me to attend and present at conferences from day one.
I was certainly out of my comfort zone, introducing myself and my work to professors I admired, but it got easier with time. Many of my career opportunities have arisen through word-of-mouth from this type of exposure – from being asked to partner on grant applications to giving interviews on national radio. Perhaps more importantly though, I am an academic because I believe in our role as public intellectuals, and attending and presenting at conferences allows me to engage in national and global debates on topics that get me up in the morning (figuratively and literally – I work on toilets).
There is certainly a growing space for online conferences and webinars, but I don’t think you can beat the informal conversations that happen following conference presentations or over a beer at the conference reception. I strongly believe these receptions should always be included in conference registration fees as they are essential to the conference experience, and ECRs in particular shouldn’t be excluded due to personal or institutional budgets that preclude participation in “social events”.

Euan Ritchie (Ecology / School of Life and Environmental Sciences, Deakin Uni / @euanritchie1)

It’s very important you attend and present at a conference early in your career. The benefits of doing so are many and occur over the short and longer term.
Conferences inspire: the energy around them is often infectious, can motivate you to explore new and cool ideas, and can potentially launch your career. Never underestimate the value of people knowing you as a person and not just as a CV or Google Scholar entry.
Networks are forged on relationships and personalities. Conferences are an opportunity to show not just the awesome research you’re doing, but that you’re a clever and nice person who people will want to work with, give a job to, etc.
Research is hard, and can be quite a lonely pursuit at times. Going to conferences makes you aware of the wonderful community you’re a part of and contribute to.
Lastly, once you’ve gone to one conference, they get much easier. People know you, and conversations and networking become easier. Remember, though, that once you’re embedded in the research community, you need to help others to also become a part of it!

Eva Alisic (Psychology / Melbourne School of Population and Global Health, Uni of Melbourne / @evaalisic)

It’s a great question, definitely worth a post! My thinking is that just giving a talk at a big conference is probably not enormously helpful unless you have exceptionally striking findings and there will be an audience. At some of the larger conferences that I know, people do sit in on talks but also quickly get to saturation point / information overload, so not much of an ECR’s story will stick.
What I DO think is useful are other aspects of conferences, such as pre-conference workshops and ECR-focused initiatives. As an example, I have been involved with the creation and management of several ‘Paper in a Day’ events that precede the major disciplinary event. These have been really beneficial to research students and ECRs. Connecting with national or international peers as an ECR (and doing some intellectual work together, as with Paper in a Day) is very valuable, and happens more effectively in person than via email.
In brief, my answer to Julie’s question would be: traditional conferences are mostly useful for the non-paper aspects of your participation, especially if there is some collaborative writing/discussion time with your international peers. These will be your friends, colleagues, inspirations, and challengers for the years to come, so it’s worth investing in building these connections and collaborations.

Next week’s post is Part 2 of the responses to ‘how important is it to present papers early in one’s career?’, and includes takes from Kylie ‘Happy Academic’ Ball, the fabulous Kerstin ‘Postdoc Training’ Fritsches, and the lovely Sarah Hayes.

No, Private Schools Aren’t Better at Educating Kids Than Public Schools: Why This New Study Matters

by Valerie Strauss, Common Dreams:

Despite evidence showing otherwise, it remains conventional wisdom in many parts of the education world that private schools do a better job of educating students, with superior standardized test scores and outcomes. It is one of the claims that some supporters of school choice make in arguing that the public should pay for private school education.

The only problem? It isn’t true, a new study confirms.

University of Virginia researchers who looked at data from more than 1,000 students found that all of the advantages supposedly conferred by private education evaporate when socio-demographic characteristics are factored in. Nor was there any evidence to suggest that low-income children or children enrolled in urban schools benefit more from private school enrollment.

The results confirm what earlier research found but are especially important amid a movement to privatize public education — encouraged by Education Secretary Betsy DeVos — based in part on the faulty assumption that public schools are inferior to private ones.

DeVos has called traditional public schools a “dead end” and long supported the expansion of voucher and similar programs that use public money for private and religious school education. According to the National Conference of State Legislatures, 27 states and the District of Columbia have policies allowing public money to be used for private education through school vouchers, scholarship tax credits and education savings grants.

The new study was conducted by Robert C. Pianta, dean of U-Va.’s Curry School of Education and a professor of education and psychology, and Arya Ansari, a postdoctoral research associate at U-Va.’s Center for Advanced Study for Teaching and Learning.

“You only need to control for family income and there’s no advantage,” Pianta said in an interview. “So when you first look, without controlling for anything, the kids who go to private schools are far and away outperforming the public school kids. And as soon as you control for family income and parents’ education level, that difference is eliminated completely.”

Homes with higher incomes and greater parental educational attainment offer young children – from birth through age 5 – educational resources and stimulation that other children don’t get. These advantages presumably carry on through the school years, Pianta said.

Pianta and Ansari used a longitudinal study of a large and diverse sample of children to examine the extent to which attending private schools predicts achievement and social and personal outcomes at age 15.

They started with data from the National Institute of Child Health and Human Development’s Study of Early Child Care and Youth Development. That was a 10-site research project that followed children from birth to 15 years with a common study protocol, including an annual interview and observations at home and school and in the neighborhood. In that years-long study, there were 1,364 families that became study participants, with ethnicity and household income largely representative of the U.S. population, though Pianta and Ansari looked at 1,097 of those children for their analysis.

The Pianta-Ansari study examined not only academic achievement, “which has been the sole focus of all evaluations of private schooling reported to date, but also students’ social adjustment, attitudes and motivation, and even risky behavior, all of which one assumes might be associated with private school education, given studies demonstrating schooling effects on such factors.” It said:
“In short, despite the frequent and pronounced arguments in favor of the use of vouchers or other mechanisms to support enrollment in private schools as a solution for vulnerable children and families attending local or neighborhood schools, the present study found no evidence that private schools, net of family background (particularly income), are more effective for promoting student success.”
And it says this:
“In sum, we find no evidence for policies that would support widespread enrollment in private schools, as a group, as a solution for achievement gaps associated with income or race. In most discussions of such gaps and educational opportunities, it is assumed that poor children attend poor quality schools, and that their families, given resources and flexibility, could choose among the existing supply of private schools to select and then enroll their children in a school that is more effective and a better match for their student’s needs. It is not at all clear that this logic holds in the real world of a limited supply of effective schools (both private and public) and the indication that once one accounts for family background, the existing supply of heterogeneous private schools (from which parents select) does not result in a superior education (even for higher income students).”
Pianta and Ansari note in the study that previous research on the impact of school voucher programs “cast doubt on any clear conclusion that private schools are superior in producing student performance.”

A 2013 book, “The Public School Advantage,” by Christopher A. Lubienski and Sarah Theule Lubienski, describes an analysis of two huge data sets of student mathematics performance, which found that public school students outperform private school students once results are adjusted for demographics. Pianta and Ansari refer to this book in this part of the report:
“Although recent studies separating enrollment from length of attendance suggest that the longer lower income students remain enrolled in a private school (at least up to 4 years) the higher the likelihood of accruing substantial benefits, the present report finds that length of enrollment was not associated with student outcomes, once family income was taken into consideration, consistent with other non-experimental work (Lubienski & Lubienski, 2013). For the one-third of the sample enrolled at any time in private school, on average these students attended private schools for 5 to 6 years, which is longer than the most recent follow-up evaluations of voucher programs (Berends & Waddington, in press; Mills & Wolf, 2017). Thus, even for students who remained in private school for almost half of their K-12 experience, there was no discernible association with any of the wide range of outcomes we assessed at age 15.”

Monday, July 23, 2018

What’s Your PhD Story?

For most people, doing a PhD feels like a marathon: it requires intense focus on one topic for a long time, and it’s easy to feel unmotivated at numerous points along the way. Chengcheng Kang tells her story of why she wanted to do a PhD in the first place and how she uses this as her daily source of motivation…

Since I was little I have been told I should do a PhD. During my Master’s studies at St Andrews, I realised that I had to make this choice myself. Whilst the idea wasn’t mine at the beginning, I decided to do a PhD anyway. My personality is creative and untrammelled, so I don’t like sitting in an office doing similar jobs all day. But I thought that if there was a chance I could improve myself, work with world-class professors and expand the boundary of human knowledge, that would be awesome! The most convincing reason for me was to leave a footprint in the endless flow of human history. I want a meaningful life; I don’t want to pass through the world like a stranger. I want to make contributions, and I wish for my name to appear in books, papers, or any written record that endures.
The second reason is related to a story. I have a friend who feels very frustrated in her job: her boss treats people differently based on their degrees. She provides a lot of input, but her boss tends to listen more carefully to feedback from Master’s graduates or PhDs. Rightly or wrongly, knowledgeable people generally receive more respect, and thus their voices are heard. This is the second reason for me to keep learning and pursue a PhD. I want to reach a position of eminence and become a visionary. When we understand the philosophies and truths behind the phenomena we study, we can identify the real problems at hand and then make a difference. I want to change the world, even just a little, to make it a better place.
It doesn’t matter what stage you are at or where you come from in doing a PhD: just don’t forget why you chose this path and keep moving on. Learning is the only thing with no shortcut. If the journey is tough, difficult or disappointing, don’t forget to add two words after it: ‘for now’. It might be tough for now, it might be difficult for now, but when we work hard and master the skills in our field, the rainbow will show up. So please don’t give up, remember your motivation and believe in yourself!
What’s your PhD story? How do you motivate yourself to keep pushing through tough times? Tweet us at @ResearchEx, email us at libraryblogs@warwick.ac.uk or leave a comment below.
Chengcheng Kang is a PhD candidate from Beijing, China in Group of Information System Management at the Warwick Business School. You may contact her on Twitter at @cckkcc29

One Way to Run Classroom Discussions on Divisive Issues

by Beckie Supiano, Chronicle of Higher Education: https://www.chronicle.com/article/Running-Class-Discussions-on/243967
In these politically polarized times, it can be difficult to meaningfully discuss a hot-button issue in the classroom — or anywhere else. Rather than considering something new, or even really listening, we’re all inclined to shut out the views of those on the opposite side. After all, we already know what they’re going to say.
Jill DeTemple, an associate professor of religious studies at Southern Methodist University, has encountered this problem in her courses. So she started using an approach called Reflective Structured Dialogue, which was developed by family therapists in Boston decades ago. DeTemple, who responded to our recent newsletter article about what professors can do when outside forces — including current events — push them to rethink the way they teach a course, says she learned about Reflective Structured Dialogue after reconnecting with a college friend whose organization, Essential Partners, developed the method.
While Reflective Structured Dialogue was initially adapted from tools used in family therapy to help the broader public navigate contentious debates, DeTemple quickly saw how the discipline of religious studies could benefit from it. After all, the discipline considers questions on which many students have deeply held positions.
Family therapists describe the problem that often arises in difficult discussions by saying they get “stuck,” DeTemple said. Research shows that humans go into fight-or-flight mode — in which their ability to think critically is compromised — when they feel threatened. The problem: “Your body,” she said, “can’t tell the difference between a viewpoint threat and a bear.”
The challenge for professors, then, is to help students get unstuck from this instinctive response. That’s what Reflective Structured Dialogue is meant to do.
Here’s how it works. The dialogues have a facilitator — the instructor, in a classroom setting — who guides the conversation along pre-agreed lines. Participants are encouraged to reflect before they speak. The approach hinges on the use of “curious questions,” those meant to let the questioner learn from others, rather than to trap them or convince them that they’re wrong. And it’s highly structured, with people taking set turns to speak and doing so under a time limit, and the facilitator following a script.
Difficult conversations often get off to a bad start, DeTemple said, because they begin with everyone arguing their position. Reflective Structured Dialogue opens instead with the facilitator having participants tell a story that has informed it. So to start off a discussion about guns, for instance, students might share their experiences hunting as a child, or describe an act of gun violence that touched their lives. Next, participants talk about the values that underlie these experiences. Then they talk about any ways in which they feel pulled in competing directions on the issue. That third question, DeTemple says, is meant to bring out empathy. Only after working through the three starting prompts do participants start asking each other questions. The goal is not to have anyone switch sides, she said. It’s to help students change the way they relate to one another, to listen and consider different perspectives. Doing so, it turns out, can enrich students’ understanding of difficult content, DeTemple has found, since they have an opportunity to consider it in context.
It probably wouldn’t be practical for a professor to run every class as a full-on dialogue. But the model can come in handy when divisive topics come up in class — either on the syllabus, or unexpectedly. And elements of the approach can inform the way a course runs day in and day out.
Using the model has “spurred deeper and more engaged conversations among my students,” wrote DeTemple in an article written with John Sarrouf, the college friend who introduced her to the method. Students, she continued, “have spontaneously commented in weekly discussion posts and in end‐of‐term evaluations about new abilities to listen, speak about, and appreciate viewpoints and materials that initially made them feel off‐kilter or defensive.”
DeTemple and Sarrouf are now part of a team working to spread and study the approach with a grant from the University of Connecticut Humanities Institute’s Humility and Conviction in Public Life project. The group is teaching faculty members at participating campuses how to use Reflective Structured Dialogue and then interviewing professors and surveying students in their courses to study its impact. There are also plans for a book, DeTemple said, though she adds that nothing helps professors run a dialogue as well as practicing the skill at a workshop.
Still, DeTemple adds, there’s a simple tip at the heart of the model that any instructor could apply: When students get stuck, ask them to tell each other a story.
Learning about this project, in which a system developed by therapists is being used by professors, got me wondering about other pedagogical insights from unexpected sources. Have you ever applied an approach developed for a setting outside of higher education to a challenge in your classroom? Tell me about it at beckie.supiano@chronicle.com and it may appear in a future newsletter.

Monday, July 16, 2018

Why We Should Require All Students to Take Two Philosophy Courses

by Howard Gardner, The Chronicle of Higher Education: 

If I were the czar of higher education that is not explicitly vocational, I would require every undergraduate to study philosophy. And if I were both czar and czarina, I would require all students to take two philosophy courses — one in their first year and another just before graduation.

At first blush, that requirement may seem bizarre, especially coming from me. I am a psychologist and, more broadly, a social scientist — not a philosopher or a humanist. Even more deplorably, I have never taken a philosophy course myself.

But I’ve been thinking about philosophy in recent months because of two developments. A year ago, Mills College eliminated its philosophy major and merged the department into an interdisciplinary unit — just one example of a growing number of institutions that have eliminated majors in certain humanities fields. On a more positive note, in January, the Johns Hopkins University won a $75-million donation to bolster its philosophy department. It occurred to me that a good use of that money would be to design new required courses in philosophy for the benefit of both philosophy departments and undergraduates in general.

The kinds of courses I would require probably wouldn’t even have "philosophy" in the name, although they would all be taught by academics trained in that field. Indeed, except in certain explicitly liberal-arts contexts, I might well avoid the word entirely, since it would frighten some students (and, even more, their parents) and confuse others ("Is this about my personal philosophy?"). Instead, I would call the requirement something like "Big Questions of Life." Every student in their first year of college would choose one course from a list with titles like:

  • "Questions of Identity" (Who am I? Who are we?).
  • "Questions of Purpose" (Why are we here? What’s it all for?).
  • "Questions of Virtues and Vices" (What is truth? What is beauty? What is morality?).
  • "Questions of Existence" (What does it mean to be alive, to die, indeed, to be? Or not to be?).
Those are the questions!
Moreover, I would start with the students’ own individual and collective answers to the Big Questions of Life. But — and here is the crucial move — I would not end there.
Instead, I would help students understand that reflective human beings have been asking and answering such questions for millennia, across many cultures and many epochs. Some of the answers those people came up with to the perennial riddles of life have been profound, as indeed have some of the subsequent critiques of their answers.
I want students to appreciate that this conversation over time and across cultures is important and — crucially — that they can and should join in. But they should do so with some humility and respect, building on what has been thought and said before.
There are two powerful reasons for requiring students to start (and end) their education with philosophical questions and thinking. First, scholarly disciplines, however they may have evolved in recent times, began because of human beings’ interest in understanding diverse aspects of their world — ranging from the movement of the stars to the strivings of the soul. A compelling way to understand the spectrum of knowledge is to encounter some of the intriguing ways in which our predecessors thought about those same issues. Second, for most of us, it’s only in late adolescence that we become able to reflect on bodies of knowledge and their relation to one another.
Philosophical ways of reading, thinking, and arguing would constitute good training for four years of college — whether or not the "ph" word is ever uttered.
In Years 2 and 3 of a student’s education, faculty members across the disciplines and at several degrees of sophistication could build on the initial exposure to philosophical thought, contouring it in ways appropriate to their particular courses. Whether you are teaching poetry, psychology, or physics, you should be able to talk about the ideas that originally motivated the practices in your discipline, the ways in which those ideas have remained constant or changed, and how they relate to ideas in other fields, both neighboring and more remote.
To do that, faculty members need not be masters of philosophy, just as a philosopher need not be a master of the other fields. But all professors should be able to — indeed, should want to — provide a context for their field of study. Imagine how inspiring and motivating those conversations could be from course to course, and discipline to discipline.
During an undergraduate’s senior year, philosophical topics and concerns would return as a required course, once again taught by philosophers or philosophically trained scholars. But this time, students would approach the discipline more directly through the use of philosophical texts that deal with timeless as well as contemporary issues — for example, seminal texts on just and unjust wars, human and artificial intelligence, bioethics, the nature of consciousness.
The goal: to equip graduates with a philosophical armamentarium they could draw from — and contribute to — for the rest of their lives.
At Mills College, the loss of the philosophy department and major will decrease the likelihood that students will master the critical ways of thinking that have been the hallmark of philosophical thinking since classical times. It will be far more difficult for students there to understand the origin and development of different lines of scholarship and how they relate to one another. At Johns Hopkins, a generous donation should mean that more graduating students will be armed with powerful cognitive tools that should serve them well in whatever work and leisure pursuits they elect.
It would be disappointing — even tragic — if less-wealthy institutions elected to banish philosophical thinking from their campuses. Leaders of such campuses should, instead, be ingenious in drawing on philosophically trained instructors to inform foundational first-year courses and provide culminating courses of synthesis.
Indeed, in the 19th century, it was customary for the president of a college to provide an overview course at the end of the students’ education. Think of the powerful message that a president would send by advocating required philosophy courses for all incoming and graduating students. Why, that kind of initiative might even attract a multimillion-dollar donation.
Howard Gardner is a professor of cognition and education at the Harvard Graduate School of Education.

The PhD Process: But What About Creativity?


Today’s guest blogger is Steven Thurlow, who is undertaking a doctorate at The University of Melbourne. As part of his studies, he has written about the perceptions of creativity held by PhD candidates in the Arts (see Thurlow, Morton & Choi in The Journal of Second Language Writing). He is currently investigating how Arts academics understand the notion of creativity in doctoral writing, both what it is and where it is found.
It was the last class of our 6-week “creative” writing circle for Arts doctoral writers at the Australian research-intensive university where I work. We had spent each 2-hour class looking at one aspect of creativity – both practical examinations of creativity at the textual/product level and more esoteric discussions about how creativity might be present in doctoral writing processes and practices. The mood was buoyant as the students began taking their leave and heading back to their various disciplinary nooks and crannies.
As she was heading for the door, one of the more enthusiastic participants turned to me. “Gee, Steven, that was a really interesting course and I learned so much.” Then came the body blow: “But I still really have no idea what creativity actually is, and how I can use it in my work.”
Looking at creativity can be disconcerting like that. Spectre-like, it rarely reveals its full shape and form in the academy. But despite a distinctly frosty welcome and even hostility in some quarters, it lingers and lurks in the shadows; in the cracks and crevices of academic discourse; a quixotic beast; a reminder of risk and a beacon of what could be.
Creativity is a term that resists neat definitions, a buzzword that bleeds across academic, professional and self-help contexts. As an explorer of creativity in doctoral writing contexts, I too have struggled with nomenclature. In investigating what it is and why it could be important for doctoral writers, I have tried to stake out some boundaries. In no particular order, the notion of creativity in doctoral writing:
  • tends to have a more practical application in universities and is often used synonymously with terms such as innovation and novelty;
  • is commonly associated with the expenditure of imaginative effort that results in creative content and/or creative expression/form (Tardy, 2016);
  • is always subject to expert judgment in the guise of expert readers/examiners of doctoral work.
From the position of creativity researchers such as Csikszentmihalyi (1996) and Tin (2016), creativity springs from a potent mixture of personal/innate characteristics, a product outcome, the process or practice of the creator and a cocktail of other environmental factors. All these forces come together to face an institutional gatekeeper who judges the final thesis document.
So, why is it important for doctoral writers to acknowledge and use creativity in their thesis-writing efforts? One reason connects very explicitly to one crucial ingredient for every successful thesis: originality. Indeed, creativity is often spoken about interchangeably with originality, but they can be very different beasts. From my perspective, to reach the thesis nirvana of true originality, doctoral writers need the spark and inspirational passion that characterise creativity. Despite its cosily symbiotic relationship with originality, creativity is all too often sidelined in the academy. Working with doctoral writers, I have often observed seemingly competent and highly creative students who:
  • inform me they are unable to use specific creative words, structures or approaches in their work, as they are “forbidden”;
  • rarely consider (or have explained to them) the specific processes and practices that are needed to complete the complex task of preparing the thesis “book”;
  • often show little interest in delving closely into their writerly “selves”/identities;
  • never explicitly discuss creativity with their supervisors and/or peers.
Many supervisors, too, appear to view creativity as more of a constraint than an enabler, and rarely engage with the concept. However, through my work as both a doctoral writer and a doctoral writing teacher, I have found myself drawn to the idea of practical creativity – specifically, how it can be used to engage readers and get our essential message across to them.
Unfortunately, it appears from my investigations to date that any mention of creativity in doctoral writing – apart from among those undertaking a creative exegesis – is usually accompanied by a degree of tension. While I did find some evidence that doctoral writers (especially those in the Arts) considered creativity while writing their theses, the extent to which they did so was shaped by their confidence in, or feelings of vulnerability about, their work. Interestingly, it also seems that although most doctoral writers recognised the potential usefulness of learning specific techniques to activate their creativity, several also commented on the need to unlearn “blocking” ideas about creativity in academic writing that they had previously been taught.
All in all, it seems we have some way to go before creativity is enthusiastically accepted as a liberating and powerful force for thesis writers and, indeed, for doctoral education.
Csikszentmihalyi, M. (1996). Creativity: Flow and the psychology of discovery and invention. New York: Harper Collins.
Tardy, C. (2016). Beyond convention: Genre innovation in academic writing. Ann Arbor, Michigan: University of Michigan Press.
Tin, T.B. (2016). Creativity in second-language learning. In Jones, R. (Ed.) (2016) The Routledge handbook of language and creativity. (pp. 433-448). London; New York: Routledge.

Wednesday, July 11, 2018

Self-Care for the PhD Student

by amhcollective, Academic Mental Health Collective: https://amhcollective.com/2016/11/21/self-care-for-the-phd-student/
This blog post was written by a member of the AMHC Admin team.
You may have seen those lists on self-care circulating on social media: They advise us to take up running, brew green tea, and write in a gratitude journal, all before 7:00 in the morning.

For me, even contemplating a longer to-do list is tiring, so I hesitated at first to take on the topic of self-care this week. But understanding how to foster and maintain wellbeing is a relevant conversation for academics, especially as November rolls around. While the Academic Mental Health Collective often speaks to the challenges of mental illness, today’s post is directed more generally to graduate students, and the everyday challenges that can arise amid the demands of graduate study.
As a researcher in the humanities, and more specifically in literature, I decided to broach the topic of self-care by looking at its uses and its origins. My questions were simple: How is the term self-care appearing in popular discourse? How does that compare to the origins of self-care, and how can this help us reframe the notion of self-care?
I turned to Pinterest and Google for a quick look at the popular uses of self-care. Pinterest showed lists both daunting (100 Things To Do For Improved Self-Care) and doable (12 Self-Care Ideas That Take 5 Minutes Or Less). Google suggested a TedTalk, along with post-election articles on self-care by American news websites. Then there was a spa in Arizona offering a weekend of self-care for $2000.
These headlines assume that self-care is a virtuous behaviour, and that we know we should be practicing it. If we feel guilty about avoiding self-care, that makes us susceptible to self-care hacks and advertising. Pinterest is giving us to-do lists, but if we have the resources, we can skip the work and buy it instead, by spending a weekend at an Arizona spa.
Various magazine articles, in print and online, have been parsing the 21st-century discourse of self-care. In “What Does ‘Self-Care’ Really Mean?”, Jennifer Pan reminds us that talking about self-care means talking about politics, labour, and privilege. (Gwyneth Paltrow famously recommended popping over to France when necessary.) Ester Bloom’s article, “How ‘Treat Yourself’ Became a Capitalist Command,” investigates how self-care has been co-opted by advertising to serve corporate interests.
The modern take on self-care bears little resemblance to the way it was meant to be practiced in the ancient world. In the 1970s and 1980s, Pierre Hadot and Michel Foucault began studying care of the self in ancient Greece and Rome. Their work showed that self-care was connected to the quest for knowledge. Spiritual exercises were meant to form the self, and they focused on developing attention, self-mastery, and memory. More importantly, these practices were not meant to be solitary. Care of the self was to be practiced in community. Even the solitary spiritual exercises occurred in the context of relationships.
Given that self-care was originally rooted in the quest for knowledge, and that it has been appropriated more recently by capitalist interests, I figured that rather than writing a prescriptive list, I would invite you to join me in reflecting on my own habits.


How do I care for myself as a graduate student?

What do I do to care for my physical, emotional, and mental wellbeing? What sorts of habits and practices are helpful, and how are they embedded in relationships and community? I don’t normally think about my daily activities in terms of self-care, but when I reflect on how I cope with a demanding PhD program, a few things come to mind.

Sleeping

I love my work as a teacher and researcher, and I put a lot of energy into it. During a semester when I’m teaching, it’s not unusual for me to fall asleep spontaneously at 10pm, even if I’m at a party (I can give references for that!). Since my body actually decides my bedtime for me, it’s a no-brainer to get eight hours of sleep a night. Because I have this physical constraint, I really do believe it’s possible to allow ourselves adequate sleep while doing doctoral work.

Choosing my small talk
Some people talk a lot about deadlines and stress, and that can be a great way to bond with colleagues. But, I find myself gravitating toward people like my friends F and G, who are serious, focused thinkers whom I admire. When I run into F, she often tells me about how she’s been keeping up with the Kardashians and reading comic books. G typically begins a conversation by asking what films, books, or exhibitions I’ve seen recently. Not being ashamed of the fun things we do is a great way to counter the work-first culture that Kristen Ghodsee wrote about recently. Hearing about others’ fun also reminds me that there are many ways to enjoy life on a university campus. If there are author talks, a climbing wall, and a pottery studio within two kilometers of our workspace, why would we even want to drag out our work hours by scrolling Facebook? Spinning work narratives for our colleagues every day makes us spend more time “working.” Talking about fun reminds us to go and have some fun.

Exercising

I wouldn’t call myself an athlete, but exercise entered my vocabulary during my first semester of grad school. I don’t really hold myself to a schedule, but when I feel a bit stuck or distracted in the library, I’ve learned to recognize that as a sign that I need to move. Sometimes I swim 40 laps, other times I get on my bike and do a couple of errands around town. A German woman once told me that no matter how blue she feels, she always feels a little happier after biking! When I moved to Paris, I challenged myself to not use public transit at all, except for big trips outside the city. This decision made exercising automatic: I had to bike a couple of kilometers even to do sedentary things, like work in the library and meet my friends.

Not working
Flexibility is one of the perks of humanities research: you can work where and when you want to. I do not take advantage of this flexibility. Even though I don’t have office space or obligatory hours of work, I go to work from Monday to Friday. I work at the library or the café, and I arrive by 9am at the latest. I sometimes meet friends for lunch, or for a mid-afternoon coffee. Once 6pm rolls around, I allow myself to close my laptop, pack up and go home to my partner, who works on a real office schedule. Sometimes we open our computers after dinner, to answer some emails, read some articles, or in his case, work on developing an app. Some evenings I try to go screen-free, because reading a novel provides a better break from my work than reading articles online.

Building a work community
As a humanities scholar, I knew that isolation would be a struggle while writing my dissertation. I’ve found various ways to build social interaction into my work week. I’ve started organizing project-centered work groups, because meeting with colleagues is lower-stakes than meeting with professors, and social meetings are a great reward for solitary work. I put together a reading group to read a difficult French theorist; an article writing group that met weekly for 12 weeks using Wendy Belcher’s book; virtual writing groups with weekly Skype meetings; and a writing group that met on weekday mornings to work side by side for four hours in the library. Because my field doesn’t automatically give me officemates, I try to create spaces for that type of relationship. My writing buddies empathize with problems, ask good questions, and they make me laugh. Plus, they are more generous readers than I am of my own work!

Non-work community
Work communities are good in an end-focused sort of way; they help me keep up my writing momentum. There’s another sort of community that I’ve found refreshing, and that’s the group of people who are in no way connected to my work. In my case, that’s a faith community, but I could imagine this transfers to any sort of affiliation group (sports, activism, charity work, hiking, politics, even a book group). Spending time and developing friendships with people in different professions and seasons of life reminds me that although academic work is exciting, absorbing, and difficult, it only constitutes one sector of human experience. (Being on-call for a friend who’s about to give birth really puts a dissertation into perspective!)

Imagining multiple futures

When I considered doing a PhD, my professor gave me The Talk. It went something like this: the job market is terrible, the economy is not what it used to be, and nowadays you can’t expect a tenure-track job as a reward for writing a dissertation. If you want a comfortable life with a house in Montreal, go to law school. But if you really want to learn, and you’re okay with driving a taxi to pay the bills, by all means, go do a PhD!
That framing has stayed with me, and I let myself imagine working in various jobs. I imagine being a kindergarten teacher, a barista, a writer, a server, an entrepreneur, a bookstore worker, or an administrator in the university. Although I would very much like to be a tenured professor, I always try to imagine multiple futures. I do this for two very practical reasons. First, I research life narratives, and I’ve come to believe that our imagination shapes our reality. Second, being committed to one future plan is too stressful. I am able to write a dissertation. I am able to read stacks of books. I am able to teach intense courses. I am not able to do all this while believing that I must land a tenure-track job the moment I graduate. So I do the first three things (write, read, teach) and reject the fourth task (maintaining stifling beliefs). This is liberating, and I like to think it will help me make the alt-ac switch if necessary.


Thinking about how I cope with the demands of grad school has shown me that I do actually engage in practices that could be considered self-care:
  1. Physical health: sleeping and exercising
  2. Leisure: I schedule fun activities with my partner and friends in advance, and I talk openly about my leisure activities to encourage a fun-loving culture among my peers
  3. Putting boundaries on work: devoting certain hours and spaces to work, and allowing myself to shift gears at the end of the day
  4. Community: maintaining various types of relationships in which I am giving and receiving (i.e., my relationship with my partner, supportive professional relationships, diverse types of friendships)
  5. Mental exercises: actively imagining myself doing other jobs to alleviate the pressure of the job market
I try to avoid indulgent behaviours that disguise themselves as self-care. Sometimes I respond to a tough problem in my work with so-called retail therapy at the bookstore, aimless internet browsing, or a slice of cake at a fancy café. If, on the other hand, I invite a colleague to join me for cake, we can often work through the problem in ten minutes, and we leave energized from our coffee break!
Self-care opposes the logic of capitalism, which benefits from increasing consumption and employee output. Instead of openly opposing self-care, corporations have made the term work for them. Self-care sells products and services, and it is mobilized in the workplace to shift the burden of care from the employer to the employee. Companies are persuaded to sponsor lunchtime meditation programs when doing so promises a decrease in sick days. But if self-care simply means consuming more and turning our bodies into optimized labour machines, it is an empty concept.
*          *         *
Taking self-care seriously has radical implications: putting our relationships, health, and wellness before our professional ambitions and obligations. For me, that means regularly considering taking breaks from my academic life, if my relationships or my health is at risk.
One of my friends, K, told me that she was putting off a long-overdue doctor’s appointment, because she felt guilty about using the time when her child is in daycare to access medical care for herself. Of course, as soon as she put that into words, she realized that she had been compromising her health for productivity, and she booked the appointment (and I offered to watch her toddler for an hour). If you’re at some sort of roadblock, I encourage you to text a friend with a request: reading a draft of your chapter, giving you a ride to the dentist, or just sharing coffee to talk about what’s challenging this semester. In my experience, people love to help, and asking is the hardest part!
Reflecting on our own self-care in conversation with our friends can certainly help us map patterns in our behaviour and evaluate how they impact our wellbeing.
I’ve shared my own reflections here, not as an expert, but as a conversation starter. Honesty about our self-care practices can offer perspective to our peers, and help younger colleagues who may be particularly vulnerable to the pressure to be productive all the time. How do you care for your self, and who helps you with your self-care? Which habits are serving you well, and which practices might you let go?