
Generative AI from a Writing Coach’s Perspective

By Dave Harris

Following last week’s conversation circle, I was thinking about the fear that AI may take our jobs. Professionally, this struck me because, as a writing coach, I know that fear is one of the great dangers to a writer. Personally, however, I’m not interested in using AI as a tool, because I have my own ideas to explore, and writing, teaching, and research all help me develop those ideas. My internal curiosity won’t disappear if some AI takes my job, nor will my need to find good ways to occupy my time. For those reasons, I will continue my work as a writing coach, teacher, and researcher.

If you fear that AI will take your job, I want to encourage you to keep writing, teaching, and researching. It’s worth it because:

  • AI is a long way from developing sophisticated understanding of the issues important to humans
  • AI is a long way from being able to teach human experience and judgement effectively
  • Writing/Teaching/Research, done right, is an uplifting personal experience

The third is the most important, so I will start there.

Writing/Teaching/Research is an uplifting experience

One of the oldest philosophical debates is what it means to have a good life. I don’t have a certain answer, but I do hold a good life as a goal for myself, for those I interact with as coach, teacher, or consultant, and for everyone in the human community. And I believe, following the psychologist Mihaly Csikszentmihalyi, author of Flow, that the best moments in our lives come when we are challenged and engaged in a difficult, worthwhile task. The attempt to understand the world, through research, teaching, and writing, is one such difficult, worthwhile task. Indeed, it is no surprise to me that a general term for learning, “philosophy,” derives from the Greek philo- (love) and sophia (wisdom).

Writing, teaching, and research are all, to me, different facets of the same effort to understand the world and to share that understanding with others. Perhaps for some people the desire to understand the world is separate from the desire to share that understanding, but in my experience, solitary work is not enough—real insights grow from trying to incorporate multiple different perspectives.

The work of a researcher/teacher can be frustrating and difficult. Both the effort to find answers that make sense and the effort to explain those ideas to others are demanding. But when it works—when ideas coalesce into a new understanding of the world, when you find the right expression, when you help someone else find their own epiphanies and understanding—then it is uplifting. It feels good personally, and it gives some hope of larger benefits to self and others.

Knowing the potential for this kind of positive experience is reason enough for each of us to continue pursuing research, teaching, and writing, even if some AI takes our jobs and our understanding of the world has no external value.

AI is a long way from developing understanding of the issues important to humans

Of course, developing understanding often does have external value: understanding physiology allows medical treatments; understanding psychology allows more effective pedagogy and psychotherapy; understanding engineering allows safer buildings and infrastructure. New understandings and technologies can be dangerous as well as beneficial, and the full implications of new ideas do not become obvious immediately. This uncertainty and complexity might lead us to seek an AI that can answer such questions, but the really important issues, questions of life and meaning and value, are outside the scope of current AI. Perhaps the most important of these, what a person should do with their own life, I have already touched on in the previous section, where I argued that research, teaching, and writing are good candidates for activities that lead to a good life.

But consider the vast world of real human problems: how to create a just society, how to save the world from climate change, how to achieve spiritual growth, whatever humans want to create or realize. All of these problems are related to creating good lives for ourselves and others. The engineer who builds safe bridges helps other people have better lives, while also engaging in a livelihood about which they are personally enthusiastic.

But these problems are complex. That bridge might help many make safe journeys while also disrupting a local ecosystem that benefitted many people. Complex problems with some good and some bad in each possible solution are the norm, and they can only be answered with respect to human values and human experience. The designer or planner or policy maker offering a plan must have some empathetic understanding of what people would consider good and bad, and that understanding needs to be founded in real empathy for the experiences of humans.

When Hamlet ponders “to be or not to be,” he is struggling with human experience in a way that many humans can understand from their own lives. When Macbeth laments “tomorrow, and tomorrow, and tomorrow, creeps in this petty pace from day to day,” he mourns the difficulty of life and, later in the monologue, the confusing nature of experience. A computer AI may be able to analyze or process these concepts and ideas, but it does not experience them in the way that a human does.

Human moral dilemmas and conflicts of interest are distressing, often because of the significance of both sides of the issue. Computer AIs do not feel this conflict, nor do they struggle with the pain of empathy for those oppressed or subject to unjust violence.  For these reasons, I would not want a computer AI to make decisions for me. Perhaps I won’t make an optimal decision (optimal as determined in comparison to some set of “objective” criteria), but I will make a human decision that grows out of human knowledge and experience.

AI is a long way from being able to teach human experience and judgement effectively

Because AI cannot understand human experience or important human problems, it cannot hope to teach people to negotiate these problems.  Sure, an AI can follow a script that might be very educational.  There is a lot that people can learn from following a script, unguided by any human teacher. But the complexity of human experience is beyond current AI.

As a writing coach, part of what I do is repeat many common recommendations: “write every day;” “practice writing abstracts;” “use the Pomodoro Technique (write in 15-minute chunks of time);” “set up a good writing space;” and so on. But I do not offer such recommendations at random; rather, they grow out of both (1) a coherent sense of how to develop a healthy writing practice, and (2) a sensitivity to the concerns of the individual writer. The empathy to understand a unique individual is beyond an AI, as is the ability to build an empathetic connection. “Write every day” might be a lousy recommendation to a writer who is spending hours every day staring at a blank page while “writing.”

Beyond the need to be sensitive to context, there is the very real dimension of human interaction, which is a neurophysiological experience. Human interactions trigger neurological responses, including those of the mirror neuron system. The teacher-student relationship is not just a matter of passing on a bunch of facts; it is about helping students develop ways of acting that are based on knowledge and experience.

Conclusion

The question of AI is a philosophical, practical, moral, and ethical thicket far beyond the scope of a brief essay. I could write thousands of words on the nature of problems and whether we can ever find the “right” answer. For example, I could discuss the idea of “wicked problems”—problems that have no definitive formulation, nor any “right answer.” But this essay is long enough already.

When it comes to the question of whether AI might take your job, I would say that you benefit by pursuing your own interests and motivations and trusting your inherent intelligence, experience, and wisdom. The important point, it seems to me, is that you, the scholar and teacher, can lead a better life and have more success as researcher, teacher, and writer by pursuing your interests, by trying to learn and by trying to teach (both through writing and speech).


Dave Harris, Ph.D., editor, writing coach, and dissertation coach, helps writers develop effective writing practices, express their ideas clearly, and finish their projects. He is author of Getting the Best of Your Dissertation (Thought Clearing, 2015) and Literature Review and Research Design: A Guide to Effective Research Practice (Routledge, 2020), and second author, with Jean-Pierre Protzen, of The Universe of Design: Horst Rittel’s Theories of Design and Planning (Routledge, 2010). Dave can be found on the web at www.thoughtclearing.com.
