One thing that I’ve been trying lately is to spend about 15 minutes each day on writing about what’s going on in school and life. I’ve been using this website called 750words to do it. Basically, the site helps you set a daily writing goal of 750 words — about 3 typed pages. It’s completely private, though you can share some basic info about your writing if you want.
There’s been so much activity lately that it’s gotten hard to keep track of it all. Writing for 15 minutes each day has helped me pull out the key takeaways from life: what’s going right, what needs improvement, and what I’ll try next.
This isn’t news to you, probably. Successful people also tend to be reflective thinkers and good at self-regulating. But this particular strategy is one I haven’t tried since adolescence.
In March, I’m going to try to write 750 words every day of the month. If I succeed, I’m going to buy myself some FroYo.
Recently, Adrian Sannier, SVP for Product at Pearson Education, came to Stanford and gave a talk (video) about trends he sees around higher education and the rise of the MOOC. Here are some reflections.
“What other technology do you know that hasn’t changed in the last 15 years?”
Sannier points to the innovation curve that Apple, Google, and Amazon are on, thanks to networks and data, and argues that we need to get educational institutions onto the same curve. Otherwise, he says, we’ll become increasingly irrelevant to students, who now expect personalization and interactivity from everything in their lives. That impulse reflects how our uses of technology are changing our goals. His advice: release from our individual control the things that technology can help us do at scale, and focus our time on what we still need humans to do. Many of us assume that scaling up, applying analytics, and moving interaction to the web are good things for higher ed, and that the most successful corporations on the planet are the proof.
This post is a brief response to this article about an OLPC experiment in Ethiopia. Conversation welcome.
I think it’s interesting that the project claims to want to “teach children how to learn,” and that the technology (the tablet) itself is seen as a neutral tool to help children do that. Before I go further, I should say that I believe technology can play useful roles in learning processes, and I think it is great to be oriented toward the world with an eye to equity. That said, this piece raised a few issues for me.

First, why would we think we have any evidence that the tablet taught the children how to learn? It seems more reasonable to suggest that they already knew how to learn, through their interactions with people and the world around them long before the box of tablets showed up. Maybe they simply applied the strategies they already had in order to learn to use the machine.

Second, the tablet itself is value-laden. By existing in the form that it does, by enabling certain ways of interacting and excluding others, the tablet structures possibilities. For example, it says that English is the language one “should” learn (because the OS is in English, you need English facility to participate in the tablet’s world). Further, the tablet embodies constellations of economic and cultural practices, as well as ways of seeing the world and of understanding what “knowledge” is. While the story in the article is one of absent researchers (“Look what happens with technology when we aren’t even there!”), the researchers and their values are very present indeed.
On the flip side, it would be wrong to suggest in our globalized world that some essential Ethiopian village culture is static and precious and needs to be preserved (by “us”). People choose to engage with power in ways that demonstrate their own agency, and it’s clear that these kids are far from passive recipients here. All I’m saying is that we need to remember that these types of interactions between researchers and participants are never neutral or free of power, and when we drop off a box of technology and leave, we are sending through our technology certain messages about what to learn and how to do the learning.
Today’s our last day in NYC. It’s unspeakably strange that this day has come.
Several times over the last few months, Matt has reminded me, “You know, I’ve always lived in New York. I’ve never lived anywhere else.” I wonder what it will be like for him to make this change. I hope he learns to love California. I hope I do, too.
It’s funny how futile the preparation for change can be. We made this decision in April. Talked about it extensively before then. I went on the radio to talk about my New York bucket list; then I accomplished almost none of it. We’ve been slowly packing over the last few weeks, but it never really felt real.
Can you ever really prepare to leave your best friends, your routines, and your home behind?
I’ve tried to prepare. When I’d cross the Manhattan Bridge on my way back into Brooklyn on the Q train, I’d remind myself: This is important. This is meaningful. Make sure you look out the window and encode what you see. Sometimes there are people standing in front of the window, blocking the view.
Or when I wake up at 10 on a Saturday morning, I’ll reflect on the fact of having slept through off-leash hours in Prospect Park. Again. I’ll dutifully recount my regrets: We’re so lucky to have this for Rainn (the dog). Starting July 1, it’s strip malls and sidewalks for him.
And then the moment passes, and I go back to suppressing the feeling of urgency that intermittently presages this change. I’m sure that sometime soon I’ll be sitting on my couch — this same, dog-chewed couch — but it will be in a place called Sunnyvale, CA, a place I’ve never seen before, but where I’ve somehow rented an apartment on the internet. And I’ll wonder what the hell just happened to us.
I was in an airport a couple of weeks ago, on my way home from a conference. I was talking with a new acquaintance about the decision to move to Stanford. She said something like, “That’s great. You have no strings tying you down.” She has a family. She was explaining how hard it would be to uproot everyone just for her to be in grad school.
“I do have strings,” I told her. “I’m married. I have a dog. I’m just taking my strings with me.” When I’m driving 12 hours a day across the middle states with a dog in my lap and a cramping pedal foot, I’ll try to remember how lucky I am.
And so, New York City. Thank you for the memories, even if I forget. Thanks for the opportunities, even if sometimes I failed to seize them.
The Leap input system will apparently launch this coming winter for around $70. The low price point, combined with finer sensitivity to gestures than the Kinect’s, could make it an appealing device for researchers interested in embodied learning.
Users of the Kinect and similar systems are accustomed to sweeping arm movements to control gameplay. On the Xbox 360, for example, jumping from one interface tile to the next means moving your entire arm in one fluid motion. It’s a big, sweeping gesture. The Leap condenses this interaction into an eight-cubic-foot space.

Eight cubic feet of interaction space is quite small: a 2 ft x 2 ft x 2 ft zone of movement. That means the Leap won’t be tracking full-body movement, only hands. This limitation could prove restrictive in terms of the kinds of learning tasks people can design for the device. For instance, it would prevent learners from using their entire bodies to role-play an entity in a larger system. On the other hand, a device like the Leap could be great for collecting precise data about the role hand gestures play in explanation and other learning-related processes.
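To make that scale concrete, here is a minimal sketch in plain Python (not the Leap API, whose details I’m not assuming) of checking whether tracked points fall inside a 2 ft x 2 ft x 2 ft interaction volume like the Leap’s:

```python
def in_volume(point, size_ft=2.0):
    """Return True if (x, y, z), in feet, lies inside a cube of side
    `size_ft` centered on the device at x/z and rising from it at y."""
    x, y, z = point
    half = size_ft / 2.0
    return (-half <= x <= half) and (0.0 <= y <= size_ft) and (-half <= z <= half)

# A hand hovering a foot above the device is inside the volume...
print(in_volume((0.5, 1.0, -0.3)))  # True
# ...but a full-arm sweep three feet out to the side is not.
print(in_volume((3.0, 1.0, 0.0)))   # False
```

The coordinate convention here is an assumption for illustration; the point is just how quickly whole-body movement exits an eight-cubic-foot box.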
I love that Cynthia Chiong is recommending that apps for kids try harder to involve parents. She gives three specific ways to do this:
1. Review: as with the reporting systems.
2. Share: features that let kids share what they’ve done (e.g., saved artwork or completed tasks) so the conversation can continue outside the app.
3. Link to the real world: suggested activities for parents to do with their kids to carry concepts from the app into the real world.
And I agree, but all of these assume that the actual activity the kids do with an app is solitary. Involving parents in play is a tough sell, especially when they’re accustomed to using an app as a babysitter, but it’s worth exploring ways to make it happen.
An app should work just as well if the child is alone, but designers should find ways to add value for the child — and for the adult — when an adult is willing to step into the magic circle, instead of just “reviewing,” “sharing,” and extending what the child has already done.
Seems like a bizarre idea at the moment, but given the rate at which things change I suppose I could envision us one day relying on one of these.
It’s called Ringbow, and it’s essentially a ring with a five-way button that connects to touchscreen devices through Bluetooth. Currently, it only works with Android.
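Since Ringbow’s actual protocol isn’t public here, the following is a hypothetical sketch of what app code for a five-way input might look like: the five events (up, down, left, right, press) driving selection on a grid of tiles. The event names and classes are my assumptions, not the device’s API.

```python
class TileCursor:
    """Toy model of a five-way input moving a selection cursor on a grid."""

    def __init__(self, cols, rows):
        self.cols, self.rows = cols, rows
        self.x = self.y = 0
        self.selected = None

    def handle(self, event):
        moves = {"up": (0, -1), "down": (0, 1),
                 "left": (-1, 0), "right": (1, 0)}
        if event == "press":
            self.selected = (self.x, self.y)  # commit the current tile
        elif event in moves:
            dx, dy = moves[event]
            # Clamp to the grid so the cursor can't leave the screen.
            self.x = min(max(self.x + dx, 0), self.cols - 1)
            self.y = min(max(self.y + dy, 0), self.rows - 1)

cursor = TileCursor(cols=4, rows=3)
for event in ["right", "right", "down", "press"]:
    cursor.handle(event)
print(cursor.selected)  # (2, 1)
```

Even this toy version suggests why such a device suits discrete navigation better than, say, typing an essay.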
With some people in educational technology bemoaning the inability of devices like the iPad to support higher-order tasks, like composing creative works or completing other complex projects, I could imagine a technology like this helping to close the gap, though I doubt it would be fun to write an essay using Ringbow.
The device also points to a tempering of expectations for touchscreen input in general. For example, Nintendo has suggested that it doesn’t intend to “abandon the button” in its next Wii console, or any time soon. Maybe there are just some things that buttons do better.