Getting to Know Students and their Tech Interests

(img: Raspberry Pi + Lego computer, Flickr: pikesley)

In “The Computer in School: Tutor, Tool, Tutee,” Robert Taylor viewed the computer as serving three potential roles for students: 1) a tutor, delivering instruction to students, 2) a tool, which students would use to achieve learning, and 3) a tutee, which students would instruct through programming and design activities and thus themselves implicitly “shift the focus of education in the classroom from end product to process, from acquiring facts to manipulating and understanding them.” My observation is that most of our current ed tech field focuses exclusively on the first of these roles– viewing computing as a tutor (online/blended instruction, adaptive testing, flipped classes, Khan Academy, etc.).

Part of the underlying philosophy of a 1:1 program is a desire to expand the use of the computer as a tool, since each student then has a computer as part of their school toolkit. This is especially true in a program such as ours where students own and administer the device, since the students can now customize and develop the tool to best fit their own needs, uses and interests (Do I remember right that in Star Wars, you had to build your own lightsaber before you could become a Jedi?). Our work embedding computer science into math and science classes, along with our robotics and physical computing projects in our maker space, explores the tutee role of computers, using programming as an oblique strategy towards non-computing curricular goals.

In my own Digital Media class this year, I am challenging myself to create as many tool and tutee opportunities for students as possible, so that they may understand and master a concept that I consider to be crucial to modern responsible technology usage: computers are not meant to be accepted “as is” and used off-the-shelf. Modern technology usage must involve the skills and confidence to modify and customize a piece of technology to fit each person individually. Though it is quite dated now, I highly recommend reading Neal Stephenson’s “In the Beginning… Was the Command Line” for more on this concept; it is available as a free download from the author’s website.

Over at A Recursive Process, Dan Anderson shared an activity called “My Favorite” with his math students. The concept is to pick a favorite math topic, anything at all, and share it with the class. I love this idea, and am modifying it for my first day of class.

New Schoology Features – (Almost) Adaptive Assessment for Your Curriculum?

Schoology's new Mastery panel (help.schoology.com)

Adaptive testing is one of the biggest buzz-worthy trends in Ed Tech right now– the ISTE conference was absolutely awash in companies selling adaptive testing engines, aligned with Common Core and complete with packaged curriculum materials. It’s easy to see the appeal of adaptive testing: students are assessed on a complete package of learning objectives, and any areas of struggle or difficulty are identified and targeted. Students work at a level which is appropriate for them in rigor and complexity, and can move ahead or be given additional reinforcement as necessary. Unfortunately, adaptive testing systems are incredibly complex, which makes them very hard to modify to reflect each individual teacher’s course and curriculum.

Schoology has released a handful of new features to Enterprise customers over the summer which, when used together, form a very powerful formative assessment environment. By using these tools, it’s possible to build quizzes which offer students opportunities to practice skills and content as needed, and report data back to teachers in a very granular and performance-oriented manner. For classes or schools which use standards- or learning objectives-based grading and reporting, the backwards design process of writing curriculum and assessment to match those objectives fits perfectly into this new package. The combination of Learning Objectives, Question Banks with Random Questions and the Mastery reporting panel allows teachers to generate randomized practice opportunities targeted to individual or multiple performance goals, and analyze each for diagnostic data on each student’s performance. Each of these tools requires some setup to accomplish this, so let’s dive in.

Helping Students Prioritize through Calendar Naming Conventions

Last year was our first school-wide use of Schoology as an LMS. While our first year was overall a great success for adopting the new LMS platform and upgrading from Moodle, we identified a few areas that we wanted to rethink for this coming year. One of the biggest conversations we had throughout the year was about calendaring of class events and assignments. Schoology lets students see a calendar view of all of their courses, which students reported was very helpful for them. Unfortunately, the tool isn’t very granular, and it presents all types of assignments and activities as equal on the calendar. We wanted a way to differentiate calendar entries so that students could look at a daily view and be able to prioritize based on the different types of activities that they’d see.

It’s unfortunate that we have to do this manually– the ability to create an assignment within certain categories, and have those categories reflected on the calendar, would make this whole issue disappear. Even better would be a tagging system which would allow teachers and students alike to tag activities and build context around them (planning for “Homework,” “Reading,” “Needs Extra Time” and “Individual,” for example, would be very different from “Project,” “Brainstorm/Planning,” “Skype,” “Tim”). Modern task management systems are rich in context tools such as tagging or smart search.

This speaks directly to one of my larger concerns about measuring the health of our LMS and digital tools– balancing and optimizing our information streams so that students can learn to manage digital communication without becoming overwhelmed and ignoring the information that teachers and the school are providing. Seeing a list of activities dated for the next day, for example, could be useful for a student who is skilled at prioritizing and triaging their workload. For a student still developing executive function skills, it could be too devoid of context to be useful. Furthermore, in-class activities may be tagged with a date, which would make them appear on a calendar as “due” the next day, when they have yet to be assigned (and aren’t meant to be done from home). To help us make our calendaring information more useful, Christina Serkowski headed up a faculty focus group at the end of last year and built out some recommendations. Based on those, we’ve come up with what we hope is a workable coding system for teachers to use when entering activities onto the calendar.

The Egalitarian Projector: Wired and Wireless Projection in BYOD Classrooms

Standard Dongle Bundle: HDMI, Mini Display Port, 30-Pin

Over the summer we upgraded many of our projectors, which gave us the opportunity to refresh our classroom A/V model. In a BYOD school, projection can be a logistical nightmare: students bringing myriad devices with different display adapter requirements puts a burden on the IT department to have adapters available for each class. As anyone who has spent a class period on student presentations knows, valuable time is lost with students shuffling through the front of the room and exchanging adapters even if the correct ones are all present.

Logistics aside, the wired projector also presents a subtle-but-constant “sage on stage” control dynamic: whether student or teacher, whoever is presenting and plugged in to the projector controls what is being displayed. Freeform discussion, question-and-answer, or targeted inquiry are always unbalanced since only one person has the ability to display information.

In order both to create a more flexible learning environment and to eliminate the dreaded dongle bundles, we have equipped all of our classrooms this year with both wired and wireless projection capabilities that meet our BYOD requirements.

Wired

The picture below represents our average classroom dongle bundle– HDMI, Mini Display Port and Apple 30-pin. Since our Middle School iPad program began shortly before the release of the Lightning-based iPad models, this bundle covers most of the laptops and iPads that we see on campus. It does not, however, cover Lightning-based iPads, or the many phones and tablets with mini-HDMI. Also notice that audio has to run through a separate cable. Not every student presentation requires audio, of course, but any kind of video or multimedia sharing will require plugging in two cables.

Standard Dongle Bundle: HDMI, Mini Display Port, 30-Pin

We do have a handful of Lightning adapters and mini-HDMI adapters on hand in IT, but have not deployed them into every classroom. Since we want teachers and students to have confidence in their ability to fully use every space on campus, this isn’t ideal.

Wireless

The addition to our classroom deployment this year is the use of Apple TV in combination with AirParrot. iOS devices and Mavericks-based MacBooks made after mid-2011 will broadcast audio and video to Apple TVs natively. AirParrot is a client that does the same for Windows and pre-2011 MacBooks. I’ve written about AirParrot before, and last spring it didn’t totally work with Windows 8. After conversations with both Squirrels (the company behind AirParrot– I haven’t gotten to talking to actual squirrels yet) and friends “in the know” at Microsoft, it seems like the problem was a very complex display driver setup within Windows 8. Subsequent updates to 8.1 have made AirParrot much more workable on that OS as well, to the point where we’re comfortable deploying it to the school this year.

A couple of implementation notes on AirParrot: since we want wireless projection to be available for students as well as teachers, we have purchased licenses for our students to use and will invite them to download AirParrot and request a license from IT if they want to put it on their school-use laptop. This is a cost to the school, but we purchase class-required apps for student-owned iPads in the Middle School, as well as student licenses for e-mail, and this seems consistent with that philosophy.

Second, Windows 8.1 is still not entirely seamless in its display configuration. In order to serve the display needs of both Tablet and Desktop modes, the Desktop mode has a built-in magnification setting which makes the text and icons more usable (instead of being ridiculously tiny as they would be naturally with the default resolution). This setting is the key instigator in display issues with AirParrot, and some devices may need it to be turned off in order to display correctly. This can result in the text and icons being uncomfortably small on the tablet display itself, which requires adjusting the display resolution. To complicate things further, the magnification setting requires logging out to change– it can’t be applied on the fly. This means it’s much more important to get one setting which can be “set it and forget it” rather than adjusting as you go. It seems as though different hardware models have different “sweet spot” combinations of magnification and resolution which will allow the display to be sufficient both a) in Desktop mode on the tablet and b) through AirParrot. The settings I ended up with on my Surface 2, for example, did not translate to the Surface 3 (the 3 looks great through the AirParrot, though!). We’ll continue to monitor this as the year develops.

Projecting a Socratic Seminar

Even knowing that this is a slightly awkward first step towards truly seamless wireless projection, I’m excited to see students able to use the projector as a tool for discussion and small group work as well as lecture/presentation. When students can share information and resources with a group or class in real time rather than simply as prepared delivery, and when the projector becomes one more “open access” collaboration tool, the classroom is a more flexible and balanced learning environment.

ISTE Macro: Genius Bars, Student SWAT Teams and Student-Led Tech

(img: Flickr/BerkeleyLab)

One of the great advantages of being at a large-scale conference like ISTE is seeing which ideas have begun to generate a critical mass of action. This is one trend that I observed across multiple poster sessions, discussions and presentations.

A recurring theme of our device program is the desire to teach students the “intentional and mindful” use of technology– using the right tools at the right time for the right task. This goal cuts across multiple disciplines and silos of information: technology usage and operation concepts, digital citizenship, information literacy, study skills and time management, and school policies are all wrapped up in the idea of intentional and mindful use. As with many issues of technology, a central question is where the responsibility for this body of knowledge lies. While device programs push technology in schools from isolated computer labs to integrated classroom use, there is still a need to support teachers and students with expertise and resources, especially when 1:1 and BYOD programs shift the use of technology from programmatic and sequential to “just-in-time.”

While we want to avoid the Digital Native oversimplification of “the students know this stuff already,” students do have experience using devices and software across disciplines and scenarios that can directly benefit other students. In addition, there is incredible instruction and learning that can happen through students examining technology usage in a rigorous manner and becoming “coaches” for technology use. Many schools have grown rigorous and robust student-led technology programs to support teachers and students throughout the campus on a range of technology concepts. These are some of the programs that I saw at ISTE this year (and a couple of others that I’ve since stumbled across).

In addition, Jennifer Carey has written a post about a DIY Genius Bar presentation that she attended at the EdTechTeacher iPad Summit in February, and Burlington High School shared their program via the ISED mailing list this year.

While these programs differ slightly in scope (mainly in the amount of tech “service” they provide, e.g. hardware repair), they all share some common threads: in addition to reactive service, they produce proactive media for their school and community about the tools and systems that the school offers. Many include digital citizenship education as part of that outreach. Some run during the school day, while others operate during “non-instructional” time: lunch, open periods and before/after school. All work in collaboration with on-campus professional IT or Ed Tech staff, and they all publish their work online.

These programs channel the expertise, interest and leadership of students to the entire school community through the use of digital media. Students involved in these programs get experience in media production and communication, as well as a higher level of technology usage than normal classroom applications might provide, through repair and service work, in-depth software usage, and coaching.

If you have other examples of Student Help Desks, SWAT Teams or Genius Bars, please share them in the comments below. I’ll add examples from the comments into the post as they appear.

Don’t Make(r) a New Computer Lab

As at many schools, our Makerspace is located in the room which used to house a computer lab. The transition from pull-out lab-based computing to immersive 1:1 environments has left a variety of spaces available to be used in creative ways. Schools looking to offer Maker and tinker-oriented programs (including robotics or other tech-based activities) can make a natural transition of that space by adding maker tech, and it even makes some logistical sense– these are rooms which are often designed to offer easy access to power and network outlets, and may have lockable storage for peripherals or laptops which can be repurposed for tools and supplies. But in the rush to revise the computer lab, have we recreated “The Computer Lab?” At the Independent Schools Educators’ Network dinner at ISTE 2014, I spent some time chatting with Kelsey Vrooman of the Urban School and Bill Selak, now at Hillbrook, about this very question.

I believe that the most important reason for 1:1 computing in schools is context: students using computers in Language Arts creates a context of use for the computer which places it within that discipline. The message of this style of use is clear: you have a variety of tools that you use to discover, experience and demonstrate the discipline of Language Arts– your computing device is one of them. The computer lab model decontextualized technology use by creating an abstract space, time and skill set for computing use, and we have abandoned that model because it no longer fits with our view of technology as an integrated, immersive, just-in-time resource.

Adding a Makerspace expresses, either implicitly or explicitly, some of the same desires and goals as 1:1 computing– “soft” skills or ideals such as creativity, collaboration, problem-solving and authentic work, or concrete curricular goals such as STE(A)M or 21st Century Computing/Technology Skills. As we did with 1:1 computing, we should ask the same ideological questions about the location or environment of a Makerspace: pull-out, or push-in? Standalone, or immersive? Remote, or classroom-based?

Seymour Papert described the computer lab through the lens of systems and schools in “The Children’s Machine,” calling the lab a school’s attempt to control and homogenize a resource that it didn’t know how to adopt. The lab, he argues, is a construct born of the school system’s need to clearly delineate expectations, input/output and “expertise” (in the form of a responsible teacher). Many of his observations about the “unknown” nature of tinkering-based learning hold just as true for the Makerspace as they do for computing. To be sure, there are logistical concerns which lead to a separate Maker space (just as, in the pre-mobile days, it wasn’t reasonable to put 1:1 device ratios in a classroom using only desktops): a laser cutter has to be installed with specific air circulation needs, for example, and isn’t going to be rolled into a class on a period-by-period basis. That doesn’t mean, however, that many of the elements of the Makerspace can’t be mobile: materials, tools and, more importantly, skill sets and challenges can be pushed in to classes and contextualized just as we are now doing with 1:1.

On our campus we have reached a compromise: personalized, contextualized 1:1 computing for most needs, with specialized resource centers of computers for unique needs beyond what a personal device can cover. Our publications classroom has specialized software and additional computing power for photo and image processing. The same goes for an art classroom. When Middle School students, armed with iPads, embarked on a MinecraftEDU project, we supported them with a collection of classroom laptops to run that software. The challenge in building our Makerspaces is to strike the same balance: which Maker activities require a specialized, purpose-built space, and which deserve to be pushed in and integrated into class contexts?

My Watch Thinks Everyone Should Learn to Code

(img: The Very Excellent DC Rainmaker)

Outside of my Education vocations and avocations, I am an avid triathlete. Triathletes already have a bit of a reputation as the tech- and data-geeks of the sports world, and being a technologist by day and triathlete by night, I’m probably not helping the curve. My tool of choice up until recently was the Garmin 910xt, a training computer which helped me analyze all of the various metrics of my training and performance. When the Wall Street Journal asked recently why so many “mere mortals” were conquering athletic feats like the Ironman, training computers like the 910xt were a large factor in their narrative.

Sadly, my “training brain” fell off my bike during a race earlier this year and was lost to the tri deities (or a very lucky course official). It got replaced by the Suunto Ambit 2S, a newer multisport (fancy word for triathlon) watch. Disclaimer: My wife works for a sister company of Suunto. We purchased the Ambit as a replacement in order to “keep it in the family.”

Two Paths Diverged

The Garmin has a feature built into its website that allows you to enter a workout plan ahead of time (e.g. certain distances, speeds or times). The watch will then cue you when it’s time, for example, to run, stop or change speeds. The ability to create and enter these kinds of workouts is a huge part of what makes training technology so appealing– based on modern training science, building more complex but specific and targeted workouts is more effective than “go run for an hour.” Side note: If you want to know more about this, you should contact my wife. She has her MS in this and trained people at the Olympics. I read some magazines and am not going to be in the Olympics. Garmin made this very easy.

The Garmin builder has all of the hallmarks of a modern web-based application: drag-and-drop editing, drop-down menus, a bright, friendly, color-coded interface. It is designed to let you do what you want to do as quickly and easily as possible and get you on your way without ever having to see (as a dear former colleague liked to say) “into the belly of the beast.” So when I was setting up my new Suunto, one of the first questions I asked my wife was how to enter interval workouts like this.

“You write an app for that,” she replied.

Suunto’s entire backend service for their watch is not the slick “nothing-to-see-here” recipe of the Garmin interface. It’s an Integrated Development Environment. Users develop “apps” for particular workouts, publish them to an App Store (“App Zone”), and download and modify other apps– it’s part App Store, part GitHub and part gym locker room, swirled together.

This is how Suunto envisions creating workouts. (“Sleep Monitor,” by PPIIOOTTRR)

To really drive this home: that screengrab above is not from any hidden backend– that’s from the main App Zone page for this App. Suunto is upfront and loud-and-proud about showing you that this is a pile of code, and here’s how this App runs.

Once an App is developed, you have the ability to play with the variables in the App Zone before you download it to your device and execute the workout. If someone has the backbone of a workout that you’d like to do, for example, but you want to change the number of repeats or the amount of time, you are presented with a series of slider bars to customize it for your purposes.

(Customizing “High Intensity Intervals,” by Movescount)

Again, note the slider bar labels– those aren’t “plain English”; those are the variable names from the code. Does your average user know what “INTDIST” is?
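
To make that concrete, here is a minimal sketch, in plain Python rather than Suunto’s actual app language, of the kind of logic one of these interval apps encodes. INTDIST is borrowed from the slider label above; INTREPEATS and RESTDIST are hypothetical names standing in for the other parameters a user would be adjusting.

```python
# Illustrative sketch only; not Suunto's Movescount app language.
# INTDIST mirrors the slider label above; INTREPEATS and RESTDIST are hypothetical.

INTDIST = 400      # length of each hard repeat, in meters (slider-adjustable)
INTREPEATS = 8     # number of hard repeats (slider-adjustable)
RESTDIST = 200     # recovery distance between repeats (assumed)

def interval_cue(total_distance_m):
    """Given the distance covered so far, return what the watch should cue."""
    block = INTDIST + RESTDIST                 # one hard repeat plus its recovery
    if total_distance_m // block >= INTREPEATS:
        return "workout complete"
    position_in_block = total_distance_m % block
    return "work interval" if position_in_block < INTDIST else "recover"

# 500 m in: the first 400 m repeat is done, so the watch cues recovery.
print(interval_cue(500))   # -> "recover"
```

That is the tradeoff in a nutshell: Garmin hides this logic behind a form, while Suunto hands you the variables and expects you to reason about them.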

I’ll admit that I got a little “new device whiplash” when I saw this. As with many rough device transitions, this was an issue of planning and time– I wanted to be out the door in 10 minutes on my run. I did not have time to deal with this new paradigm. So I went through the standard stages of Inconvenience (“I don’t have time to deal with something new!”), Anger/Annoyance (“Why can’t this work like my old tech?”), Dismissal (“This new stuff is ridiculous. Who needs these features?”), and finally arrived at Open-mindedness (“Okay– What can this do and how does it work, and does it match a need or interest for me?”). Thinking a little more clearly, I can see what Suunto’s going for here– their App Zone is filled with thousands of apps that are far beyond the stock “off the shelf” capacity of the Garmin (or even of what the Ambit ships with). Even just the basic interval workouts have more flexibility than the Garmin template builder, and there were definitely times when I was using my Garmin and got frustrated because I wanted it to do something that simply wasn’t an option in their interface. Suunto’s market differentiation here is giving users the keys to the entire hardware package– all the sensors, monitors, transmission protocols and output– and saying “Go nuts, people.”

With technology in general, there is a continuum which pits convenience/usability against customization/flexibility. The operating system battles, Internet platforms, EdTech platform/program decisions and user tool choices often boil down to the essential question of “Do I trust somebody else to decide how this technology should work for me, or do I want to invest the time and energy in making it my system?” Neal Stephenson argued passionately in “In the Beginning… Was the Command Line” (a great short summer afternoon read!) that as a society of computer users we are abdicating the power and willingness to bend the tool to our will, and instead making ourselves adapt to dumbed-down versions of consumer tech in the name of convenience. Here’s the alternative, if users are willing to accept the learning curve.

This is Not a Drill

This is not a Kickstarter project or a fringe startup trying to muscle into an existing marketspace. Suunto is a well-established fitness technology company. They’ve looked at the market, though, and clearly decided that their direction is going to favor customization and flexibility over ease-of-use, even at the cost of a steeper learning curve. While we debate the role that a universal skill of coding has in our students’ learning, Suunto seems to have already decided that it’s coming and that there’s a widespread enough talent and interest base to support a major product line. Honestly, I wish them luck, but… while I’m ideologically on board with their plan, and I’m probably pretty far to the tech-savvy side of their user base, I gagged a bit at the idea that I had to either a) write an App myself or b) find and modify an existing one, just to go out and do the workout that I had planned for the afternoon.

This is the first major case that I’ve seen of a piece of consumer tech from an established major company banking on the “codeability” of its user base. As such, I think it’s a fascinating test case for the Internet of Things: how hackable will manufacturers make their devices, and will a consumer base adapt to seeing scripting languages appear in everyday life? If this is indicative of a growing trend, or if this training device has legs (ha!), it may signal that the “should every person learn to code” argument has already left the academic sphere and that the consumer technology market will answer the question for us.

Say That to My Face?

(img: forum.xda-developers.com)

A common concern around Digital Citizenship and online bullying is that many people view the “culture of the Internet” as one riddled with negativity and behavior that’s anti-social (if not outright sociopathic). The trolls and “lowest-common-denominator” debates that run below your favorite news site or online magazine scare many teachers away from using online publishing, forums or discussion boards in class. Why are behavior norms so different online? An interesting framework came through this morning from 99U: “Born Hatin’: Why Some People Dislike Everything.”

Psychologist John Suler proposed what is perhaps the best known analysis of the phenomenon in the Online Disinhibition Effect. It lists six primary factors as to why we may treat others differently online than we do in person:

  1. You don’t know me. Anonymity protects the critic’s “real life” reputation and shields them from retaliation and from owning their actions.
  2. You can’t see me. Face-to-face interactions tend to have more empathy because we can see the person we are engaging with. It’s hard to feel ashamed when you don’t even know who’s affected. You’re just a screen to me, not a person.
  3. See you later. I don’t have to deal with your instant response, or even wait for it! I can dump my thoughts on you and never return.
  4. It’s all in my head. Suler argues that online interactions can distort reality. I can make up whatever attributes about you that I want, justifying my actions.
  5. It’s just a game. The overused response of critics who do sometimes get called out: “It’s just the Internet, man!”
  6. Your rules don’t apply here. This is the internet, where closing out a live chat isn’t rude, despite the fact that leaving in the middle of a conversation would be rude in real life.

Being able to lay these out for students could open up lots of interesting ways to engage with the norms of digital culture– role-playing, for example, or acting out example comment threads could be a great way to confront the gap between online speaker and listener (albeit a bit dangerous– manage this activity carefully). Using imagery or posters to create responses or counter-arguments to these points could form the basis of a school digital citizenship campaign.

These rules aren’t just about being online, though– I see some of these in play on a regular basis in the interactions between bike commuters/cyclists and drivers, for example (which got a great treatment in this Norwegian public safety video). Furthermore, the article presents this theory in context of a larger finding: some people are inherently “likers,” who are more inclined to respond positively to new ideas, while some are inherently “haters” who will find a reason to rate things negatively. “It paints a very clear picture: no matter what you create, a small group of people will hate it, often without reason.” This is another, equally important lesson at the root of all culture and society, digital or face-to-face: You manage your own behavior, and accept that you can’t be responsible for how some people act. The balance is to accept meaningful, productive or informed critique while recognizing and discarding the trolls and haters.

Do Suler’s six factors translate to your observations or experience with online publishing and discussion? Can you see a way in which you might want to use these as a coaching tool with your students? How do you coach giving and receiving online feedback? Join in the comments below!

Write On! Touchscreen Tablet PC’s and Music

(Microsoft Surface Pro 2, microsoft.com)

Cross-Posted May 23, 2014 at Choralnet

This week Microsoft announced the third generation of their Surface Tablet PC, and the attention it garnered shows that the market is starting to mature for these hybrid devices, which combine the processing power of a laptop with the touchscreen interface of a tablet or smartphone. To some degree, these devices (called Hybrids, Tablet PC’s, or Touchscreen Laptops) are hard for consumers to wrap their brains around: is it a tablet (albeit a more expensive and heavier one)? Is it a computer? Why would I need this when I already have x? These devices can offer some interesting possibilities in the music technology field, but I suggest that properly understanding what these devices are meant to do will help us understand where they can best be utilized.

The Players

While Samsung and others have made Android-powered tablets that tout their increased power and productivity over devices such as the iPad, the Tablet PC’s run on the new Windows 8 platform. Windows 8 attempts to merge both a touchscreen interface and apps with the familiar Windows desktop that we’re used to from the history of that operating system. While Windows 8 got some decidedly heated feedback, the subsequent update to 8.1 has been much better received (8.1 is a free update to 8). Complicating things a bit, and driving some of the misunderstanding about the power of the Tablet PC’s, has been the release of a stripped-down version of Windows 8 (called RT) designed for mobile devices such as phones and lighter tablets. RT is the version which is meant to compete with the Android- and iOS-powered tablets, but it is limited in terms of what it can run. Developers have been much slower to embrace Windows RT and move their apps already developed for iPads and Android tablets into a third operating system. This has led to a collective impression that the Windows Tablet PC’s “don’t have many apps to run.”
If we discard mobile-purposed Windows RT devices for the moment, devices running the full version of Windows 8 suffer from no such limitations on the programs available– since it’s a full version of Windows, everything that your Windows laptop or desktop runs will run on these devices as well. Rather than thinking of devices like the Surface Pro or the Lenovo Helix or Yoga as tablets, think of them as laptops that you can write directly on. And therein lies the potential for the music field– the combination of a touch interface and the computing power of a full operating system.

Audio Recording

One of the statements that I hear most often about working with audio recording on the iPad is that it’s much easier to do the fine controls of music editing on a touchscreen device than with a mouse and keyboard. Physically manipulating the software sliders as you would a board, drawing envelopes and filters, or nudging the playback head for fine editing and splicing all lend themselves well to the fine finger control available on a touchscreen (or with a stylus) rather than the larger, clumsier mouse control. On a Tablet PC, we gain the ability to use this style of interface, but can apply it to fully-powered Windows software. Again, while mobile-oriented RT devices have to wait for programs to be designed specifically for that space, chances are that all of the software currently running on your Windows device will translate to the Windows 8 hybrids– your full Cubase setup, for example.

The processing power and storage capacity of these machines are significantly higher than those of a mobile tablet, and that, combined with built-in USB ports, means that you can use them with external audio interfaces to a much greater degree than is possible with mobile tablets. While still being smaller and lighter than your traditional laptop, and thus easier to deploy in a field recording setup, such a machine can be the computer hub for your recording needs.

Notation and Composition

As with the recording, the ability to use your full Windows programs in combination with the touchscreen interface is an intriguing combination for composition. Whatever your preferred notation program, running it on a hybrid device will allow you to “ink” and edit your manuscripts by hand using the stylus. In comparison to iOS or Android, I find the Windows 8 stylus experience to be much smoother and higher-quality. Writing on an iPad, for example, always feels like the pen tip is a bit too thick for my tastes, and my script usually ends up being a bit “fat” and sloppy because of it. Writing on my Surface Pro 2, by comparison, feels very realistic. This review of the upcoming Surface 3 from WIRED describes writing within one row of graph paper. That level of detail makes writing within a notation program very smooth and satisfying. With a little practice, I was able to use the keyboard number pad to switch note values with one hand while writing with the stylus in the other, for a pretty efficient workflow. And of course, with the USB interface, things like keyboard input and external sound synthesis devices are still available as well.

One More Toy?

Some people are natural gadget-collectors, and the idea of adding another device to the quiver isn’t intimidating at all. For the rest of us, using a Tablet PC involves thinking a bit about what place in the toolbox it best occupies: does it replace an existing device? Does it make something else redundant? Thinking of these devices as tablets with more power, I initially held mine up against my iPad and found it unsatisfying. It was once I decided to use my Surface Pro 2 as my full-time work machine that I understood its value– it is truly a laptop with extra capacities. As such, I added some extra work considerations (extra monitors, external keyboard) that make it indistinguishable from my previous desktops or laptops. When I come to something in graphics or audio which is best served by the touchscreen, I can pick up the stylus and work directly on the screen. It’s a great combination of modes, and of course I still have the mobile flexibility. There are times when I use it in a traditional “tablet” capacity as well, although it lacks many of the apps that we’re used to from the iOS and Android space.
In the end, ironically, it did end up largely making my iPad redundant, but only because most of the things that I used to do with that device have now either been scaled up to the Tablet PC or down to my smartphone. As more devices with this model appear in the market, including the (much larger) Surface 3 from Microsoft and what now feels like a steady rollout of devices from other manufacturers, a wider range of power and size will be available, letting people choose whether they want a true powerhouse machine or something closer to the traditional tablets. Regardless, the combination of the full operating system and the touchscreen interface gives us huge possibilities for specialty or niche computing needs such as music and audio, where a wider range of software, diverse input/output capacity and higher processing power are all necessities.

How About You?

Have you experimented with a hybrid or Tablet PC running the full version of Windows 8/8.1? What are your thoughts or experiences? Do you have questions about these devices? Join in the comments below!

CamStudio = Malware Package

Just a heads-up to people looking for screencasting software options for PC: CamStudio bills itself as an open-source, free alternative to more expensive retail software. It may well be that (though it didn’t even work for me when I installed it), but it also comes pretty heavily loaded with malware. Read the installation notes closely, click “Advanced” on every dialog box, and make sure you know what you’re doing when you install it. Better yet, avoid it and move on to another option.

In addition to “Optimizer Pro,” which was pretty easy to deal with, I got popped with a nasty little tool called “webget” (description and removal instructions). To get to that removal point, though, I first had to change the service properties for two services (“utilwebget” and “updatewebget”), both of which are set by default to restart whenever they’re stopped. The process (a rough scripted sketch of these steps follows the list):

  1. Change service properties to “Do nothing” when service is stopped.
  2. End the services
  3. Uninstall webget from Programs.
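
If you would rather script it than click through the Services control panel, here is a minimal sketch of the same idea from an elevated (Administrator) prompt. It assumes the service names quoted above, and it sidesteps the recovery-options dialog by simply stopping and disabling the services so they cannot relaunch; adjust the names to whatever the installer actually dropped on your machine.

```python
# Rough sketch: stop and disable the leftover "webget" services before uninstalling.
# Service names are the ones reported above; adjust to match your own machine.
# Run from an elevated (Administrator) prompt.
import subprocess

SERVICES = ["utilwebget", "updatewebget"]

for svc in SERVICES:
    # Set the startup type to disabled so nothing can relaunch the service...
    subprocess.run(["sc.exe", "config", svc, "start=", "disabled"], check=False)
    # ...then stop the instance that is currently running.
    subprocess.run(["sc.exe", "stop", svc], check=False)

# With both services stopped and disabled, uninstall webget from
# Control Panel > Programs and Features, then re-run a malware scan.
```

You can confirm with "sc.exe query utilwebget" that each service actually reports STOPPED before you uninstall.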

Here’s another report with totally different sets of embedded malware.