Learning Styles and Physics (or: Embracing Uncertainty)

Being back in the classroom as a student has given me lots of opportunities to reflect on different learning styles. Or, perhaps, more accurately, on my own learning style.

I tend to give my undergraduate field of physics a lot of credit in developing my academic style – though, I suppose, it’s equally possible that this happened the other way around: that my initial learning style attracted me to physics in the first place.

But, regardless of the order of these items, I find that I am deeply comfortable with a high level of uncertainty in my learning process.

You can see, perhaps, why I think I may have gotten that from physics. Physics is complex, and messy, and, of course, deeply uncertain.

Most importantly, this uncertainty isn’t a mark of incompleteness or failure. Rather, the uncertainty is an inherent, integral part of the system. There is no Truth, only collections of probabilities.

It’s a feature, not a bug.

I’ve noticed myself frequently taking this approach while learning. I’m taking a fantastic Computer Science class right now for which I would be tempted to flippantly say that I have no idea what is going on.

Like Schrödinger’s cat, that statement is both true and untrue. Until observed directly, it is caught miraculously, simultaneously, equally, in both states.

I have no idea what is going on, but I’m totally keeping up.

And I don’t think it’s simply a matter of confidence – my inability to articulate at which extreme I lie isn’t just a problem of trusting my own talent in this area. While, of course, it’s impossible to fully disambiguate the two, it honestly feels most accurate to embrace both states: I have no idea what is going on, but I am totally keeping up.

While I have only a passing familiarity with the literature of pedagogical theory, I don’t recall ever hearing anyone describe education in this way. (Please send me your resources if you have any!)

I used to think of learning as an incremental, deliberate process – like climbing a ladder or building a staircase. Each step of knowledge brought you a little closer to understanding.

Perhaps this is just the difference of being in a Ph.D. program, but I’ve rather come to think of learning like this:

Knowledge is a hazy, uncertain cloud. The process of learning isn’t simply building “towards” something, but rather it’s the process of coalescing and clarifying that cloud. It’s about feeling around for the edges; finding the shapes and patterns hidden within.

Someone told me recently that physicists can learn anything. I don’t know if that’s true, but I do think that there’s something to accepting this state of uncertainty. To be comfortable being lost in a foggy haze that you can neither articulate nor truly understand…but to stand in that cloud and find the patience to slowly, incrementally, find meaning in the noise –

Like bringing a picture into focus.


Cyborgs and Aesthetics

In countless dystopic stories, such as the more recent films of the Terminator franchise, the robot-controlled Earth of the future is a post-apocalyptic hellscape.

In many ways, this imagery seems intuitive. Indeed, a world in which humanity has been pushed to the brink of destruction by robots bent on the eradication of mankind seems like it ought to be rather bleak.

But I wonder – how much of this imagery is driven by our own sense of self-importance? Or rather – why don’t cyborgs care about aesthetics?

To be fair, there are a number of reasons why such an assumption might be reasonable. If nuclear weapons were unleashed during the early days of the robot uprising, for example, there would surely be terrible repercussions.

But which side, do you suppose, would turn to nuclear weapons first?  Would cyborgs, bent on destroying the petty beings that created them, move to eradicate humanity using our most deadly weapons?

Or would humanity, terrified of the powerful beings we created, move to destroy them before they destroy us?

I’m rather convinced that it is humanity which would do the nuking.

More generally, there’s the sense that robots, dedicated beings of practicality and efficiency, would gladly sacrifice aesthetics to advance the end they are programmed to seek. The future is a post-apocalyptic hellscape because, to a robot, it hardly matters whether the environment is a hellscape or not.

I’m not convinced of that either. Are aesthetics, indeed, matters of pure fancy with no practical consequences? A clear day is not only a beautiful sight to behold, it is important for the lungs; no matter how ‘indestructible’ a cyborg may be, exposure to nuclear radiation – or shielding against it – is likely to be costly.

And, of course, is there no value in beauty in itself? It’s easier, perhaps, to think that cyborgs wouldn’t have capacity for such appreciation – that the awe of the universe is something humans can uniquely behold.

Yet, isn’t that the very essence of consciousness? The very moment when the intelligent becomes intelligence?

Perhaps that moment when a computer becomes alive, when it thinks for itself, “I am,” perhaps that, too, is the moment it realizes – this is a remarkable world we live in.


The Oxford Comma

There is a topic which has caused generations of debate. Lines have been drawn. Enemies have been made.

I refer, of course, to the Oxford comma. Should it, or should it not, be a thing?

For those who don’t bask in the depths of English grammar debates, let me explain. The Oxford English Dictionary, a worthy source of knowledge on this subject, defines the Oxford comma as:
a comma immediately preceding the conjunction in a list of items.

“I bought apples, pears, and grapes” employs the Oxford comma while “I bought apples, pears and grapes” does not.

You can see why there are such heated debates about this.

The Oxford comma, so named due to “the preferred use of such a comma to avoid ambiguity in the house style of Oxford University Press,” is also known by the more prosaic name of the “serial comma.”

I have no evidence to verify this, but I believe that what one calls the comma gives insight into a person’s position on the matter. Those who are pro-comma prefer the more erudite “Oxford comma” while those who are anti-comma prefer the uninspiring “serial comma.”

Why do you need another comma? they ask. You already have so many – you don’t need a serial comma as well!

These people are wrong.

As I may have given away from my own references to the “Oxford comma,” I am firmly in the pro-Oxford comma camp.

It is clear that a comma is better there.

Not only because there’s no end to the silly and clever memes you can create mocking the absence of an Oxford comma, but because – more profoundly – a sentence just feels more complete, more balanced, and more aesthetically pleasing with the comma there. It just feels right.

But, of course, this is what makes language so wonderful. Language is alive, and that life can be seen in all the little debates and inconsistencies of our grammar.

It’s like cheering for your favorite sports team: we can fight about it, mock each other, and talk all sorts of trash, but at the end of the day we can still be friends.

…Wait, we can still be friends, right?


Can You Become “A Morning Person”?

Someone asked me to write a post about becoming a morning person. Based, I suppose, on my expertise in frequently getting up in the morning.

I was skeptical – can one actually become a morning person? What does that even mean?

I suppose it’s no surprise that there are already countless articles on the subject – apparently, it only takes five minutes to become a morning person. Or, if you prefer, here are 19 Ways to Trick Yourself Into Becoming a Morning Person. (My favorite tip: “nap cautiously”). If you’re looking for a somewhat more legitimate news source, here’s a Times Magazine op-ed wistfully entitled, “How I became a morning person.”

I am still skeptical.

All these self-help articles are written in the blasé tone commonly found in fat-shaming weight loss articles. If you want to lose weight, eat less. If you want to be a morning person…just get up in the morning.

This advice does not seem that helpful.

For one thing, sleep habits are – at least in part – biologically determined. In one 2013 study, researchers used the standard Munich Chronotype Questionnaire to sort participants into “morning” and “night” type people. They then studied melatonin and saliva samples from the participants, finding that the difference in circadian rhythms could be “detected at the molecular clockwork level.”

I am certainly reaching far beyond my areas of expertise, but it seems as though there is sufficient evidence for the conclusion that it is unproductive to simply tell a night owl to try harder to get up in the morning.

To compound the matter, there is some evidence to suggest that “misalignment of circadian and social time may be a risk factor for developing depression” – i.e., that “night owls,” whose preferred timing is disconnected from what is generally socially acceptable, are at higher risk of depression.

To be clear, chronotype is not a binary state. On the whole, a population may skew towards early or late, but diurnal preferences follow a distribution in which most people fall somewhere in the middle. So those individuals glibly writing guides for how they became morning people were most likely not particularly night people to begin with.

If you really want to be a morning person, it seems reasonable to give it a try…but if it really doesn’t work for you, it may be best to try finding a lifestyle that better supports your given sleep preferences.

So, I guess I don’t have very good advice.


The Downside of Matrix-Style Knowledge Imports

Sometimes it seems as though it would be most convenient to be able to matrix-style download knowledge into my head. You know – plug yourself into a computer, run a quick program, and suddenly, it’s all, “whoa, I know Kung Fu.”

Imagine for a moment that such technology were available. Imagine that instead of spending the next several years in a Ph.D. program, I could download and install everything I needed in minutes. What would that look like?

First of all, either everyone would suddenly know everything or – more likely, perhaps – inequality would be sharpened by the restriction of knowledge to those of the highest social strata.

It seems optimistic to imagine that knowledge would become free.

But, for the moment I’ll put aside musings about the social implications. What I really want to know is – what would such technology mean for learning?

I suppose it’s a bit of a philosophical question – would the ability to download knowledge obliterate learning or bring it to new heights?

I’m inclined to take such technology as the antithesis of learning. I mean that here with no value assumptions, but rather as a matter of semantics. Learning is a process, not a net measure of knowledge. Acquiring knowledge instantaneously thus virtually eliminates the process we call learning.

That seems like it may be a worthy sacrifice, though. Exchanging a little process to acquire vast quantities of human knowledge in the blink of an eye may be a fair trade.

All this, of course, assumes that knowledge is little more than a vast quantity of data. Perhaps more than a collection of facts, but still capable of being condensed down into a complex series of statistics.

There’s this funny thing, though – that is arguably not how knowledge works. At its simplest, this can be seen as the wistful claim that it’s not the destination, it’s the journey. But more profoundly –

Last week, the podcast The Infinite Monkey Cage had a show on artificial intelligence. While discussing the topic, guest Alan Winfield made a startling observation: in the early days of AI, we took for granted that a computer would find easy the same tasks that a person finds easy, and that, similarly, a computer would have difficulty with the same tasks a person finds difficult.

Playing chess, for example, takes quite a bit of human skill to do well, so it seemed like an appropriate challenge.

But for a computer, which can quickly store and analyze many possible moves and outcomes – playing chess is relatively easy. On the other hand, recognizing sarcasm in a human-produced sentence is nearly impossible. Indeed, this is one of the challenges of computer science today.

All this is relevant to the concept of learning and matrix-downloads because one of the groundbreaking areas of artificial intelligence is machine learning – algorithms that help a computer learn from initial data to make predictions about future data.
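To make that idea concrete, here is a minimal sketch of the learn-then-predict loop – assuming Python and scikit-learn’s LogisticRegression, which are my own illustrative choices, not anything discussed on the show:

# A toy machine learning loop: fit a model on initial data, then predict on new data.
# Illustrative sketch only; the feature values and labels below are made up.
from sklearn.linear_model import LogisticRegression

X_train = [[0.1, 0.9], [0.8, 0.2], [0.2, 0.8], [0.9, 0.1]]  # initial data: rows of features
y_train = [1, 0, 1, 0]                                       # known labels for that data

model = LogisticRegression()
model.fit(X_train, y_train)            # "learn" from the initial data

print(model.predict([[0.15, 0.85]]))   # make a prediction about future data

The point is simply that the model is never handed the answer for the new example; it has to generalize from what it saw before.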

The idea of downloadable knowledge implies that such learning is unnecessary – we only need a massive input of all available data to make sense of it all. But a deeper understanding of knowledge and computing reveals that – not only is such technology unlikely to emerge any time soon, it is not really how computers work, either.

There is something ineffable about learning, about the process of figuring things out and trying again and again and again. To say the process is invaluable is not merely some human platitude, it is a subtle point about the nature of knowledge.


The Hollow Men

In 1925, T.S. Eliot, already an established and respected poet, published The Hollow Men.

It was a transitional time for the author. Two years later, Eliot – who had been born to a prominent Missouri family and raised in the Unitarian church – would convert to Anglicanism and take British citizenship, a conversion reflected in his 1930 poem, Ash Wednesday.

He was in an unhappy marriage. In his sixties, Eliot confessed in a private letter, “To her, the marriage brought no happiness. To me, it brought the state of mind out of which came The Waste Land.”

Eliot had composed the epic poem largely during three months of enforced bed rest following a nervous breakdown.

It was in that state of mind – post-Waste, as Eliot later described it, yet without the peace he found later in life – that Eliot wrote The Hollow Men.

We are the hollow men
We are the stuffed men
Leaning together
Headpiece filled with straw. Alas!

The poem is full of allusions to hollow men – Guy Fawkes of the infamous Gunpowder treason and plot; Colonel Kurtz, the self-professed demigod of Joseph Conrad’s Heart of Darkness; Brutus, Cassius, and other men who conspire to take down Julius Caesar; and the many cursed shades who call to Dante as he travels through the afterlife in The Divine Comedy.

Our dried voices, when
We whisper together
Are quiet and meaningless
As wind in dry grass
Or rats’ feet over broken glass
In our dry cellar

Ultimately, these hollow men, full of veracity and determination in life; worshipped, perhaps, as gods among men, are nothing. They are only the hollow men, the stuffed men.

In the vestibule of Hell, Dante finds such hollow men. “These shades never made a choice regarding their spiritual state during life (neither following nor rebelling against God) instead living solely for themselves. Neither heaven nor hell will let them past its gates.”

They are remembered – if at all – not as lost,

Violent souls, but only 
As the hollow men
The stuffed men.

There is an in-betweenness to their existence, a nothingness far worse than the tortuous circles of hell. They are shape without form; gesture without motion. They are paused, eternally, in that inhalation of oblivion.

Between the idea
And the reality
Between the motion
And the act
Falls the Shadow

Perhaps we are all hollow men. Perhaps we are all doomed to that empty pause. Perhaps, we, like Fawkes, will be found – seemingly on the eve of our victory – standing guard over our greatest work not yet accomplished.

This is the way the world ends
This is the way the world ends
This is the way the world ends
Not with a bang but a whimper.

Or, perhaps not. Perhaps, as Eliot did himself, we can find new beginnings out of our quiet darkness. As Eliot writes in Ash Wednesday:

Because I do not hope to turn again
Because I do not hope
Because I do not hope to turn
Desiring this man’s gift and that man’s scope
I no longer strive to strive towards such things
(Why should the aged eagle stretch its wings?)
Why should I mourn
The vanished power of the usual reign?


Reflections on a First Semester

As my first semester comes to a close, I’ve been reflecting a lot on my experiences over the past few months.

I have learned so much – though the act of trying to enumerate just what I’ve learned seems far too daunting for today. Learning is a funny thing, you know. The growth that comes from learning is far more than the accumulation of facts. It’s a subtle process that involves slowly acquiring not only facts, but ways of thinking and approaching problems.

David Williamson Shaffer of the University of Wisconsin-Madison writes about this a lot in his work on epistemic frames. Building on the concept of “communities of practice” – spaces where people within a given field share similar approaches – Shaffer describes epistemic frames as “the ways of knowing associated with particular communities of practice. These frames have a basis in content knowledge, interest, identity, and associated practices, but epistemic frames are more than merely collections of facts, interests, affiliations, and activities…knowing where to begin looking and asking questions, knowing what constitutes appropriate evidence to consider or information to assess, knowing how to go about gathering that evidence, and knowing when to draw a conclusion and/or move on to a different issue.”

So, essentially, over this first semester, I have been learning how to see the world through a particular epistemic frame: learning what questions to ask and what tools to deploy in answering them.

There is, of course, still so much to learn, but I’m walking away from this first semester with critical thinking skills that will serve me well in the years to come.

More important than the facts I studied or the equations I learned, was the constant challenge: what does this mean?

It is not enough to know how to write a program or how to call a function that will do all the hard work for you. (Okay, I’m still learning to do that!) It is great to be able to do those things, but those skills are only valuable if you know what it means – if you understand how the calculation is done and can properly interpret the results. So, that is what I have learned this semester: I have learned to think critically, to question my own intuition as well as the equations that are put in front of me.

And, of course, I have had a ton of fun.


Flow

In positive psychology, there is a concept called “flow,” developed by University of Chicago psychologist Mihaly Csikszentmihalyi. Flow is a state of deep focus, known colloquially as being “in the zone.”

More precisely, Csikszentmihalyi identifies flow by asking, “Do you ever get involved in something so deeply that nothing else seems to matter and you lose track of time?”

Now, being something of a skeptic and contrarian, I’m automatically suspicious of anything that has “positive” in the title. And somewhat similarly, when I first heard about “flow” I thought it was ridiculous. I’m used to the hectic world of working life: managing more tasks than are humanly possible to complete while people constantly interrupt with questions. It’s not orderly, but it’s still possible to get a lot done and finish the day with one’s sanity intact.

I’ve been having a different experience since I started school. I certainly have plenty of work to do, but there are fewer interruptions. I come in, start my work, and don’t move again for hours. I’ve gotten out of the habit of constantly tabbing over to Facebook or email – at the end of the day, I find I have a lot of catching up with the outside world to do.

I have almost missed class or nearly forgotten to go home because I’m so focused on what I’m working on.

I guess this is flow.

Csikszentmihalyi makes the bold claim that “it is the full involvement of flow, rather than happiness, that makes for excellence in life.” I’m hardly ready to go that far, but it’s certainly an interesting state. And there’s something particularly satisfying about accomplishing a task from a state of flow.

But the state is not without its drawbacks. Most obvious are the possible health effects: I’ve gone from a life of hectic running around to one of entirely sitting. But more fundamentally, I’m not sure I’m entirely comfortable with the idea of losing time. I don’t want time to simply slip past me while I focus on my work: I’d rather be aware of each moment.


Han Shot First

On the eve of being honored by the Kennedy Center, legendary filmmaker George Lucas sat down with the Washington Post to reflect on his remarkable career. Among other things, Lucas used the opportunity to defend one of his most controversial decisions: editing a scene in the remastered Star Wars to make it appear that Greedo shot first.

The Washington Post explains:

[Lucas] went back to some scenes that had always bothered him, particularly in the 1977 film: When Han Solo (Harrison Ford) is threatened by Greedo, a bounty hunter working for the sluglike gangster Jabba the Hutt, Han reaches for his blaster and shoots Greedo by surprise underneath a cantina table. In the new version, it is Greedo who shoots first, by a split second. 

Lucas justifies the change:

“Han Solo was going to marry Leia, and you look back and say, ‘Should he be a cold-blooded killer?’ ” Lucas asks. “Because I was thinking mythologically — should he be a cowboy, should he be John Wayne? And I said, ‘Yeah, he should be John Wayne.’ And when you’re John Wayne, you don’t shoot people [first] — you let them have the first shot. It’s a mythological reality that we hope our society pays attention to.”

The Washington Post points out that Lucas is “a passionate defender of an artist’s right to go back and tweak his work.” In some ways that seems fair – yet art is not created solely by the artist. Art, nearly by definition, is a shared experience; the interpretation of art is a creative act in itself.

So, yes, perhaps Lucas has the right to re-edit his films and add terrible CGI characters. But we, too, have the right to rebel. To hold our interpretations valid.

And CGI monstrosities aside, there is something particularly problematic about the narrative change of “Greedo shot first.”

Lucas himself almost gets at it in his defense: we wish Han hadn’t shot first.

Han is a good guy. We like him. He’s a little rough around the edges, maybe, but beneath his gruff exterior, we know he is a hero. And heroes always do the right thing.

Lucas sees that as a reason to re-write history. A true hero wouldn’t shoot first, therefore a true hero didn’t shoot first.

But that is exactly why it is so important to acknowledge that Han shot first. Maybe we wish he hadn’t, maybe we want him to be so upstanding that no matter how dicey the situation he always gives others the benefit of the doubt. But no matter how much we wish that to be the way the world is, we all know the truth:

Han shot first.

Perhaps it wasn’t the ideal thing to do. Perhaps it represents a moral lapse in his character. But it’s who he is, and it doesn’t diminish his capacity to be a hero.

All of us have made mistakes in our lives. All of us have moments we are not proud of. All of us wish we could exploit our “artist’s rights” to go back and edit our darker moments, to remake ourselves more like the heroes we wish we could be. But, of course, none of us really have that luxury.

The truth is, Han shot first. But that doesn’t make him less of a hero. He can still save the day and he can still get the girl.

Han shot first, but we are all capable of redemption.

 


Interdisciplinarity

When I started my Ph.D. program somebody warned me that being an interdisciplinary scholar is not a synonym for being mediocre at many things. Rather, choosing an interdisciplinary path means having to work just as hard as your disciplinary colleagues, but doing so equally well across multiple disciplines.

I suspect that comment doesn’t really do justice to the challenges faced by scholars within more established disciplines, but I can definitely attest to the fact that working across disciplines can be a challenge.

Having worked in academia for many years, I’d been prepared for this on a bureaucratic level. My program is affiliated with multiple departments and multiple colleges at Northeastern. No way is that going to go smoothly. Luckily, due to some amazing colleagues, I’ve hardly had to deal with the bureaucratic issues at all. In fact, I’ve been quite impressed to find that I experience the department as a well-integrated part of the university. No small feat!

But there remain scholarly challenges to being interdisciplinary.

This morning, I was reading through computer science literature on argument detection and sentiment analysis. This relatively young field has already developed an extensive literature, building off the techniques of machine learning to automatically process large bodies of text.

A number of articles included reflections on how people communicate. If someone says, “but…” that probably means they are about to present a counterargument. If someone says, “first of all…” they are probably about to present a detailed argument.

These sorts of observations are at the heart of sentiment analysis. Essentially, the computer assigns meaning to a statement by looking for patterns of key words and verbal indicators.
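As a rough illustration of that idea – a toy sketch of my own in Python, nothing like the actual systems described in the literature – cue-word matching can be as simple as:

# Toy cue-word "argument detection": tag a sentence by scanning for discourse markers.
# Real sentiment analysis and argument mining are far more sophisticated; this only
# illustrates the keyword-pattern idea.
CUES = {
    "counterargument": ["but", "however", "on the other hand"],
    "argument": ["first of all", "because", "therefore"],
}

def detect_cues(sentence):
    s = sentence.lower()
    return [label for label, words in CUES.items()
            if any(word in s for word in words)]

print(detect_cues("But first of all, consider the evidence."))
# -> ['counterargument', 'argument']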

I was struck by how divorced these rules of speech patterns were from any social science or humanities literature. Computer scientists have been thinking about how to teach a computer to detect arguments, and they’ve established their own entire literature attempting to do so. They’ve made a lot of great insights as they built the field, but – at least from the little I read today – there is something lacking from being so siloed.

Philosophers have, in a manner of speaking, been doing “argument detection” for a lot longer than computer scientists. Surely, there is something we can learn from them.

And this is the real challenge of being interdisciplinary. As I dig into my field(s), I’m struck by the profound quantity of knowledge I am lacking. Each time I pick up a thread it leads deeper and deeper into a literature I am excited to learn – but the literatures I want to study are divergent.

I have so much to learn in the fields of math, physics, computer science, political science, sociology, philosophy, and probably a few other fields I’ve forgotten to name. Each of those topics is a rich field in its own right, but I have to find some way of bringing all those fields together. Not just conceptually but practically. I have to find time to learn all the things.

It’s a bit like standing in the middle of a forest – wanting not just to find the nearest town, but to explore the whole thing.

Typical academia, I suppose, is like a depth-first search – you choose a direction and dig into it as deeply as possible.

Being an interdisciplinary scholar, on the other hand, is more of a breadth-first search – you have to gain a broad understanding before you can make any informed comments about the whole.
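For anyone who wants the analogy made literal, here is a toy sketch in Python of the two textbook traversals; the little graph of topics is made up purely for illustration:

from collections import deque

# A tiny "forest" of topics, just for illustration.
graph = {
    "start": ["math", "philosophy"],
    "math": ["physics", "statistics"],
    "philosophy": ["ethics"],
    "physics": [], "statistics": [], "ethics": [],
}

def depth_first(graph, root):
    # Dig as deep as possible along one branch before backtracking.
    stack, seen, order = [root], set(), []
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            order.append(node)
            stack.extend(reversed(graph[node]))
    return order

def breadth_first(graph, root):
    # Survey everything one level at a time before going deeper.
    queue, seen, order = deque([root]), {root}, []
    while queue:
        node = queue.popleft()
        order.append(node)
        for nbr in graph[node]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return order

print(depth_first(graph, "start"))   # ['start', 'math', 'physics', 'statistics', 'philosophy', 'ethics']
print(breadth_first(graph, "start")) # ['start', 'math', 'philosophy', 'physics', 'statistics', 'ethics']

Depth-first follows one branch all the way down before backtracking; breadth-first surveys every neighbor at one level before descending – which is exactly the difference between the two kinds of scholarship.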
