Archive for January 2014

School and Faculty Blogs




I’m brainstorming a new blog for our school which would be collaboratively written by our faculty. The vision would be to showcase aspects of our teaching and learning, and to use it as a discussion piece for our collaborative development. A couple of our staff members blog individually, but hosting a collaborative faculty think space is a little different than a personal/individual blog, so I’m looking for examples of school/department/team/faculty blogs which seek to highlight their learning communities. I’ve also included any descriptors, mission statements or “about” lines which I think help encapsulate why the blog exists and how it serves the school community. I’ll add to this list as I continue to find more!

Of course, if you have any examples to contribute, please let us know in the comments below.

Nicholas School Blogs (Duke University, Nicholas School of the Environment) – “Our blogs showcase the classes, travels, research, internships, and events that comprise the Nicholas School experience.”

Findings in Research & Development (American School of Bombay) – “This is the American School of Bombay R&D Department’s blog. The Research and Development Department studies, prototypes (when required), designs, and develops new teaching and learning environments for the 21st century. Through this blog we share our ‘Findings’ and engage in conversations about new designs of schooling and teaching and learning. Also check out the following pages: R&D Explorations, R&D Reports and Presentations, ASB Maker Movement, and Day 9. We look forward to your participation in ASB’s Learning Journey.”

GCE Voices Blog (Global Citizenship Experience, Chicago) – “Welcome to the GCE/C2 Labs “Beta” Blog, a place dedicated to reinvent education. The term “beta” is generally used to indicate the latest release in the life cycle of a product or service. Here, we use “beta” to indicate the nature of our laboratory school, a place of creativity & experimentation. This blog contains a series of “Online Installations” (aka OnIns), through which we compile student work in interactive ways. We hope you dive into the installations and feel inspired by our motto: “Reinvent Possibility”.”

High School Bits (Bedford/St. Martin’s High School) – “High School Bits is a multi-author weblog for high school English teachers. Bits was created by instructors, authors, and Bedford/St. Martin’s editors as an interactive space for teachers to talk about their craft and to share ideas. The blog features leading scholars and master teachers along with new and emerging voices.”

The Franken-Paper: Constructing a Best Response


This post originally appeared on my first attempt at blogging on January 10, 2013. I’ve shut down that blog and am slowly moving posts over.

A key challenge in the collaborative classroom is balancing the inherent benefits of group work with the accountability and data of individual assessments. In my IB courses in particular, there’s a desire to prepare students for the types of exams that they’ll see at the end of their IB studies, while not losing the goals of the IB learner profile, which include Reflection and Collaboration. By adding a couple of steps to our assessment process with mock IB-style written exams, I am able to integrate a crowdsourcing element into our assessment which helps all the students benefit from each other’s work, without sacrificing the data from an individual exam. I call it the “Franken-Paper”: a response produced by the class, spliced together from the best responses to each individual question.

1. Design open-ended questions.

Following the IB model of exams for our Information Technology in a Global Society course, I know that there are a certain number of questions in each category and level of complexity that students will encounter. I also know that the exam questions will always come from the same “command terms,” or question stems (i.e. “Define,” “List,” “Justify,” “Compare,” etc.) specified in the course syllabus. With that as a framework, I try to design the written assessments to always include open-ended questions matching the command terms and structure of the prompts that they’ll see in the IB exams.

The key, though, is simply to make sure that all questions are free-response and open-ended. Multiple choice, fill-in-the-blank or other “guided-response” prompts don’t work with this system.

2. Choose the “best responses.”

As I’m reading all of the submissions, I keep a document open in the background to capture the best responses. Most of my exams are submitted electronically, so it’s easy to copy and paste, but with hand-written mocks I’ll simply type the best answers in.

When I encounter what I think is a particularly good answer to a prompt, I write it down. If at some point I come to a better answer in another example, I’ll replace the first example with the second. The goal is to come to the end of my marking with the best answer for #1, the best for #2, etc. These often (in fact, so far always) come from different students’ submissions– no one student has the best answer for all questions. This is key to the discussion and analysis that comes later.
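
For the spreadsheet-minded, the capture-and-replace process above amounts to keeping a running “best so far” for each question number. A minimal sketch in Python, with hypothetical numeric scores standing in for my judgment against the rubric:

```python
def build_franken_paper(submissions):
    """submissions: list of dicts like
    {"student": "A", "answers": {1: ("answer text", score), ...}}
    Returns the strongest answer found for each question number."""
    best = {}  # question number -> (student, answer text, score)
    for sub in submissions:
        for q, (text, score) in sub["answers"].items():
            # Replace the running best only when a later answer beats it,
            # mirroring the swap described in step 2.
            if q not in best or score > best[q][2]:
                best[q] = (sub["student"], text, score)
    return best

submissions = [
    {"student": "A", "answers": {1: ("Answer A1", 5), 2: ("Answer A2", 3)}},
    {"student": "B", "answers": {1: ("Answer B1", 4), 2: ("Answer B2", 6)}},
]
franken = build_franken_paper(submissions)
# Question 1 comes from student A, question 2 from student B --
# no single student supplies every best answer.
```

In practice the “score” is a reading against the rubric rather than a number, but the structure is the same: one slot per question, replaced whenever a stronger answer comes along.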

In the case of many possible “best responses,” I try to give the nod to the answer that represents the widest range of the class possible– the more students who can identify their own contribution to the final product, the better.

3. Present the “Franken-Paper.”

I now have an exam that is spliced together from the best individual answers that the class submitted. There are a variety of ways to handle what comes next: I can distribute them for reading and follow with a class discussion, ask the students to evaluate these answers based on the rubric, or present them myself and identify what made each answer the most successful one in my reading. Since these are open-ended questions, I’m careful in this stage not to identify something as “the right answer,” but as the answer which in my reading best fits the criteria or rubric. Any disagreements, questions or alternate answers should be discussed at this point in order for everyone to see why this was a successful answer to the question or prompt.

The key is that I do all of this before I…

4. Return their individual papers and reflect.

Now that we’ve discussed the group’s best combined thought and knowledge, we can examine each student’s individual response. Any reflection, goal setting or self-evaluation now compares their individual performance to other successful examples. Finally, the best response paper goes in their archives to study and review the subject in the future. Rather than having incomplete or unsuccessful responses to draw from and study, they have the best product of the class to learn from and continue to use.

The easiest variation is to divide the submissions among small groups and ask the students to decide on a “best answer” from their set. This works best when the groups are chosen so that no one is evaluating or discussing their own answer. You can also use Google Docs to have groups construct one synchronously, using their group’s chosen best answer or using a Jigsaw method.

Testing Crestron AirMedia Wireless Display Adapter for BYOD (Laptops)

Our BYOD environment supports laptops in the Upper School and iPads in the Middle School, so we’re searching for a wireless display solution which accommodates both. I tested AirParrot and Apple TV previously, and now I’m testing the Crestron AirMedia display adapter. Our goal is to have a classroom-based wireless display system where students and teachers alike can share on the projector or other display in order to work collaboratively.

About the AirMedia


AirMedia Display Adapter

One thing that differentiates the AirMedia right off the bat from an Apple TV solution is that the AirMedia has HDMI, VGA, and 1/8″ audio out. This makes it much more likely to play nicely with existing classroom projector and sound systems. Where the Apple TV is built to interact with your TV or home theatre system (hence the HDMI out carrying both video and audio, requiring something like the Kanex ATV adapter to separate them), the AirMedia is clearly meant to serve existing business/educational A/V infrastructure.


AirMedia displays IP address and a connect code for each session

When connected, the AirMedia projects an IP address and an access code. Anyone wishing to present must download the AirMedia client (for Mac or PC) or app (for Android or iOS), enter the credentials given, and log in. Once one device is bound to the AirMedia and actively presenting, no other devices can connect until the projection is released, preventing accidental (or intentional!) hijacking of the display. The code is randomly generated each time a session is started, meaning that your access code from Monday’s class can’t be used on Tuesday.
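
That one-presenter-at-a-time, fresh-code-per-session behavior can be modeled in a few lines. This is only an illustrative sketch (the four-digit code format and the class/field names are my assumptions, not Crestron’s actual implementation):

```python
import random

class WirelessDisplaySession:
    """Toy model of the binding behavior described above: one presenter
    at a time, with a new random access code for each session."""
    def __init__(self):
        self.code = f"{random.randint(0, 9999):04d}"  # regenerated per session
        self.presenter = None

    def connect(self, device, code):
        if code != self.code:
            return False          # wrong or stale code (e.g. Monday's code on Tuesday)
        if self.presenter is not None:
            return False          # display already bound to a presenter
        self.presenter = device
        return True

    def release(self):
        self.presenter = None     # frees the display for the next device

session = WirelessDisplaySession()
assert session.connect("laptop-1", session.code) is True
assert session.connect("laptop-2", session.code) is False  # blocked while bound
session.release()
assert session.connect("laptop-2", session.code) is True
```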

Like the Apple TV, the AirMedia runs a network service which must be accessible from any devices that you wish to use. In other words, if you have it connected to your wired network, you won’t be able to access it from Wi-Fi unless you have your network properly configured. For testing purposes, I had my computer and the AirMedia on our Ethernet network. I’ll test with the iPad at home and report in a separate post.


For each test, I used the device with the AirMedia as the primary display for normal daily operation, then threw some YouTube videos and streaming video at it to test the smoothness of playback.

Windows – Tested on a Lenovo Helix running 8.1. 2 GHz, 8 GB RAM.

Performance was seamless using the Windows device– operation on the display was nearly real-time with no noticeable lag. If I had the wireless network configured for it, I’d be using this as my main display with the Helix in tablet mode.

Mac – Tested on a Mid 2010 MacBook Pro running Mavericks. 2.4 GHz, 4 GB RAM.

Performance was noticeably laggy on the MacBook. The display continually ran 1-2 seconds behind input on the MacBook, and occasionally the videos would get choppy. Curious to see how this was taxing the OS, I noticed the following in Activity Monitor:

  • Memory usage was pretty constant: around 30 MB of RAM in use by the AirMedia process at any given time. Given that the MacBook has half the RAM of the Windows machine, my first guess was that I was maxing the RAM out. Doesn’t look like it.
  • CPU load was between 15 and 30% during normal operation. It jumped up to the high 60s (63–68%) when showing video. Again, that’s for the AirMedia process itself, so the actual rendering of video in Chrome isn’t factored in there.

Obviously, with a consistent 1-2 second lag, using the AirMedia as a primary display isn’t an option. If your students are using this to project on the main projector, that may not be a concern. In my quest for transparent/invisible technology usage, though, I don’t like the fact that everyone’s eyeballs would be on the screen being shared (i.e. projection) except the person manipulating it, who would have to be looking at their own computer. I’m less excited about this than the Windows performance, certainly.

Finally, while the Windows client works out of the box, the Mac client needed a little configuration to work with Mavericks. The initial install would only work for a few seconds before disconnecting because of a performance feature in Mavericks called App Nap. Turning App Nap off fixed the connection problem (Get Info on the AirMedia application, and “Prevent App Nap”), but in a BYOD environment, every student with Mavericks would have to configure this individually. Also, since the point of App Nap is to conserve battery power, this will have an adverse effect on the CPU power usage, but only while the AirMedia application is open. Crestron says that an update is coming soon.

Bottom Line

AirMedia is spendy– near $1000 a unit. Compared with an Apple TV and AirParrot, I’m not seeing anything here that is worth the extra $800 (at least). In a managed environment, or a Windows-only environment, it may be an option, and the VGA output will work with more projectors out of the box. We’re unlikely to move forward with this device, though.

ChoralTech: Advertising Concerts, Social Media and Streamlining



Cross-posted 1/24 at ChoralNet

Do you use social media outlets to advertise your events? It’s a simple goal to advertise our concerts and fundraisers using social media, but unpacking all of the terms, strategies, services and options available can become a full-time job, and one that seems very far removed from the rehearsing which we’d prefer to be doing. Nevertheless, either we, or someone else in our organization, should be able to use some basic services to help spread the word about our upcoming events.

The Basics

At the least, every organization should (in my humble opinion) use Facebook and Twitter to distribute concert information. There are myriad examples of how organizations do this, but I suggest a quick look at San Francisco Girls Chorus and Choral Arts as examples of organizations sharing information via Twitter, and The Choir of St John’s College (Cambridge) and The Bach Choir for examples of Facebook Pages. These are far and away the two services which have the most reach and through which you can have the highest percentage of your audience “passively” subscribe to you. There are others as well which you may use personally or have heard of: Google+, YouTube, Instagram, or LinkedIn among others. I’d suggest, though, that each social network attracts a different speciality or subset of the population, and we may use these personally to share information with friends or to subscribe to people in whom we have an interest. When publicizing our groups, on the other hand, we want maximum reach for minimum effort, which is why I’d suggest Twitter and Facebook as your mass communication media.

I’m Only Going to Say This Once!

Even with just two accounts, though, repeating efforts is miserable. Nobody likes repeating themselves, and broadcasting the same announcement twice (once on Twitter and once on Facebook) isn’t a good use of time. If you use more than one social account, find ways to link them together. You can, for example, link your Facebook and Twitter accounts so that your tweets automatically appear on your Facebook page as well. Also, if you make use of many accounts (for example, your own Twitter account as well as your choir’s), you may want to sign up for a service such as HootSuite. HootSuite lets you subscribe to many different social media accounts, read them all from one place, and post to multiple places simultaneously. Think of it as the social media equivalent of being able to access all of your email accounts in the same mail program.

It’s Called A Conversation

One of the most crucial mistakes that people new to social media make is thinking about it like an email newsletter: you send information out, audience reads it. Remember that the whole point of social media is that it’s easy for people to speak back to you– don’t forget to check your account every once in a while! If someone replied to an announcement you made, you should reply back to them. After all, it’s only polite to respond when someone wants to talk about your group! Even better, if you are setting up a group account that will have little to no activity outside of announcing concerts throughout the year, make sure that you turn on email notifications in your account settings so that you will receive an email when anything happens with your account. That way, you won’t have to check the accounts manually, but rather you’ll get an email when anyone is talking about your events or posts.

What to Share?

Both Facebook and Twitter make it easy to share video and pictures with your postings. If you’re announcing a concert, throw a picture of the poster up with the post for an eye-catching visual. If the poster isn’t done yet, have a choir member (or dropping-off spouse or parent) take a quick picture of the group warming up. Of course, the true gold would be to have a 30-second video clip of one of your pieces as a “teaser,” but that’s a tiny bit more time-consuming. If all you have is text, share the text, but it’s pretty easy to find some type of picture or multimedia to include with your posts.

Your Musicians are your First “Followers”

Once you have an account, you need to let people know about it! Posting a link on your blog or webpage is obvious, as is making sure that your Twitter and Facebook accounts are listed in the program. The reason that social media can be helpful to your advertising is that it is so easily shared, so to get those crucial first few followers, turn to your musicians. After all, they have a vested interest in packing the house too! Ask your musicians to follow the organization’s accounts, and tell them when concert announcements are going out in case they miss them. That way, they can share your announcements with all of their friends and family with one click. Obviously, those of us working in school settings need to check with our administrations regarding policies about communicating with students via social media, but make sure to clarify that you are a) using an organization account, not your own, and b) strictly disseminating information regarding the choir ensemble.

Beyond the Basics?

How do you use social media tools to advertise your concerts or events? Do you use anything besides Facebook and Twitter? Have you noticed a difference in attendance/tickets since using social media? Anything to share below?

Seth Godin, Presentations and “Hitting Direct Instruction Home Runs”


(Griffey image)

“The home run is easy to describe: You put up a slide. It triggers an emotional reaction in the audience. They sit up and want to know what you’re going to say that fits in with that image. Then, if you do it right, every time they think of what you said, they’ll see the image (and vice versa).” Seth Godin in Presentation Zen.

Godin is describing presentations in a business context, but he may as well be describing a lecture or class presentation. Direct instruction has a limited role in an open-ended or student centered classroom, but it is still a role and a part of most teachers’ toolboxes. And when a teacher decides that direct instruction is the most appropriate strategy, who doesn’t want the kind of “home run” that Godin describes?

We’ll look at how Godin suggests crafting a presentation to achieve that result, and how we can adapt those suggestions to a class setting. First, though, there’s a key question upstream: what is the point of our talk? Traditionally, a lecture was the best way to deliver large amounts of information to an audience. The lecture is born of the idea that an expert will convey information in the most effective method possible, especially because a lecture is completely scalable: the costs and resources needed to lecture to an audience of 10 and one of 100 are not markedly different.

A large part of the push against the lecture in today’s educational realm is that this is no longer the case. First, we understand from brain science and pedagogical research that the lecture is actually a very inefficient way for the audience to learn, no matter how convenient it is for the lecturer. Second, we have better ways of distributing information, such that we can provide our audience the content that we want to convey in ways that are much more flexible, permanent and tailored to our audience’s needs. So the question is: why lecture? Or, to tie back to Godin, why present?

Both Garr Reynolds, author of Presentation Zen, and Seth Godin approach presentations as something different from the delivery of content. Godin says that “the reason that we do presentations is to make a point, to sell one or more ideas.” Here, I anticipate a bit of skepticism from other educators. After all, we are not trying to sell something– we’re trying to teach. I’d argue, though, that the idea of “selling our ideas” is at the core of engagement. Replace “make a point” with “provide purpose,” or “relevance” or any other engagement buzzword from the teacher-effectiveness scale of your choosing, and the concepts map. Instead of delivering the content, deliver the relevance. Deliver the “why,” or the “so what.” Godin continues: “If you believe in your idea, sell it. Make your point as hard as you can and get what you came for. Your audience will thank you for it, because deep down, we all want to be sold.” Sell the importance of the content in order to prime your students and get them engaged for the learning activities to follow. So how does Godin recommend that you reach for one of those “home run” presentations?

You just replaced your "Igor Stravinsky" slide with a picture of people rioting at the Rite of Spring Premiere. You now have her attention.




“First make slides that reinforce your words, not repeat them. Create slides that demonstrate, with emotional proof, that what you’re saying is true.” Again, think back to the idea of relevance. What is the picture or image which speaks to the “why” or “so what?”




This stressed-out smiley face is producing absolutely no emotional response in your audience.





“Second, don’t use cheesy images. Use professional stock photo images.” If you have identified a concrete point of relevance, jump on Google Images and find a good picture which supports that point.






A “blinds” transition will not make this slide any more compelling.




“Third, no dissolves, spins or other transitions. Keep it simple.” You hate it when your students do it. So don’t do it. If your presentation needs that much spicing up to make it exciting, go back to #1 and #2.




Is this really the best use of these students’ class time?







“Fourth, create a written document.”







Number four is the essential step for the classroom which allows one and two to exist. It’s also the hardest shift for us in the classroom. Remember the original purpose of the lecture: to be the most efficient means of disseminating information. With the backlash against multitasking growing alongside what we are discovering about brain science and “task switching,” consider that notetaking is the ultimate form of multitasking: trying to process speech and visuals, in real time, and have enough executive function to sort important information from unimportant, or the bullet-able from what needs to be dictated word-for-word. Think back to the last time you sat in a lecture and had to take furious notes– did that process help you follow the talk, or hurt you? Now recall the last talk which inspired you or was particularly effective and memorable. How many notes did you scribble during that talk? Is there a connection?

Again, this gets to the heart of why we’re standing and delivering at this point in our instructional design: to provide relevance, context and meaning to the content. I am not the most effective or efficient means of delivering content to students. You are not either. We are lousy content delivery vehicles. We can use myriad better methods to give them the content which they will consume and make meaning of after we have primed the pump by selling them the vital importance of that content using our expertise and passion for learning and our disciplines.

Godin continues: “When you start your presentation, tell the audience that you’re going to give them all the details of your presentation after it’s over, and they don’t have to write down everything you say.” In other words, “Put your notes down. Take this message in, and think about it.” What would your students do next if all you did was sell them the “why” and then turned them loose on the how/who/when/where?


Do you agree with this “redefinition” of the role of direct instruction? Do you use this approach? Does this pair with brainstorming/preflection or inquiry activities in your class? Comment below!

What’s Wrong with this Picture?

According to this list, the #1 app for a smoother-running classroom is a timer. Not a communication tool for students to work together. Not a note-taking/organizational tool for students to save their work. Not a research tool to help them access whatever resources they need. Nope, if you want a smooth-running classroom with your iPads, invest in… a timer.

ChoralTech: Use Word Clouds to Interact with Text

Cross-posted at
Whether implicitly or explicitly, the pieces that we select for our concerts tell a story and convey meaning. As conductors, choosing areas of text to emphasize or using specific diction can be part of our toolkit to communicate our interpretation of the piece. That interpretation has to be relatively unified, though– while we may encourage our musicians to each bring something personal to their performance of a work, the ensemble still needs to perform the conductor’s intention: “Everyone emphasize your choice for the most important word in this line” is a fun rehearsal strategy but probably shouldn’t happen in the concert.
You can recruit and honor your musicians’ individual interpretations using visuals, however. A word cloud is a graphic which is sourced from a piece of text. The program that you use to generate the word cloud will display the most commonly occurring words the largest, allowing the viewer to quickly see what words seem to be the most important out of the selection. Simply inputting the text of the work may or may not result in anything interesting (Whitacre’s “hope, faith, life, love” would give you four words of the same size, for example). The fun starts when you open up the word cloud to crowd submission. By asking your singers to generate the text for the word cloud based on their interpretations of the work, you can create a graphic which conveys the group’s individual reactions and shows the most common submissions.
Some examples of questions that might get this started:
  • In your opinion, which is the most (/are the 3 most) important word(s) in this piece?
  • When you listen to this piece, what emotions are the strongest for you?
  • What do you imagine or visualize while performing this piece?
Anything which generates a list of ideas will work. Remember that the software is looking for word matches, so steer towards descriptive words instead of long sentences or phrases. Once you have your list from your singers, you can upload a text or Word file to a web site such as Wordle or Tagxedo, or type the text directly into the webpage to get your graphic. Either site will give you an image file which you can then use on a web page, print in your programs, or project on the wall in the concert to create a visual aspect to your performance.
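Under the hood, these word cloud generators are doing little more than counting how often each word appears and scaling its display size to match. A rough sketch of that tallying in Python (the size range and linear scaling here are my own illustrative choices, not how Wordle or Tagxedo actually render):

```python
from collections import Counter

def word_cloud_sizes(responses, min_size=12, max_size=48):
    """Tally submitted words and map counts to font sizes,
    the way word-cloud generators scale their output."""
    words = [w.strip().lower() for r in responses for w in r.split()]
    counts = Counter(words)
    top = counts.most_common()          # sorted most-frequent first
    biggest = top[0][1]
    # Scale linearly: the most frequent word gets max_size.
    return {w: round(min_size + (max_size - min_size) * c / biggest)
            for w, c in top}

responses = ["hope", "joy hope", "hope peace", "joy"]
sizes = word_cloud_sizes(responses)
# "hope" appears most often, so it gets the largest size.
```

Whatever tool draws the cloud, the singers’ most common submissions end up dominating the image, which is exactly the group portrait you’re after.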
While you can take time in a rehearsal to generate the words, create the word cloud at home, and bring it back to the next rehearsal, you can also generate one live with a little Google Docs-fu. For an interactive experience for your audience, imagine how you could have the audience generate one live as the musicians sing (or in between pieces/ensembles, for example). If the following instructions scare you, find a techie friend or singer to help you put it together: it’s actually not that complicated. If you construct a Google Doc which is open to the public, and feed the URL of that document to either Wordle or Tagxedo, the word cloud will generate off of whatever words have been added to the Google Doc. The image doesn’t refresh on its own, though, so you have to have someone refresh the word cloud to pull in words as they’re being added. You could also set up an auto-refresh of your browser to do it for you. Then, it’s just a matter of connecting your audience to the Google Doc: projecting a QR code on the wall or providing a TinyURL would work for those with smartphones. For true power users, use IFTTT to create a recipe which would add texts or Tweets sent by the audience to the Google Doc.
Poll Everywhere has a free version of polling which will generate a word cloud, but the free version is rather limited in the number of people who can respond, so it’s unlikely to be useful for a large group or performance situation.
However you generate the text, though, word clouds can be a fun and powerful way to gather reactions and interpretations from a variety of people and make them part of your ensemble’s communication and honoring of the piece of music or text.

Open Question for You: 2nd Semester Resolution


In the spirit of the New Year and the changing of the semesters, what’s one reflection/resolution (Reflect-olution? Reso-flection?) from 1st semester that you’ll try or carry into 2nd semester? Share your thoughts and discuss below!

Testing AirParrot and Apple TV for BYOD

One of the biggest challenges in the design of our 1:1 program is building the infrastructure in the classrooms to support a 1:1 iPad program (Middle School) side-by-side with a BYOD laptop program (Upper School). We have to build our systems to work flexibly with all types of devices, while still honoring the mobile mentality of the iPad school. A great example of this is in projecting: How do we build a wireless projection system in a classroom to accommodate everything? I’m now testing a combination of Apple TV and AirParrot by Squirrels to achieve wireless projection in a classroom for laptops and tablets together.

No More Apps-by-SAMR Infographics!


Seriously, can we please stop doing this?


This is not good.

I don’t want to pick on this poster, and it’s over a year old. This is not the only example of this thinking, though, and this particular picture refuses to die. But it and its ilk need to be retired.

This is, I think, a very bad interpretation of SAMR. Chrome is not an app for “Modification,” and Symbaloo is not an app for “Substitution,” any more than a hammer is a tool which builds “Houses.” I can build a house using, in part, a hammer. I can also use a hammer to build a deck, or a birdhouse or hammer a sign to a tree, all of which have very different levels of complexity. If I’m really in a pinch, I can use it to hold down my blueprints when it’s windy or pry open a paint can.

Looking at the “Redefinition” category on this chart, you would be led to believe that using these apps will result in lessons and units which allow “for the creation of new tasks, previously inconceivable.” Exciting stuff, that! I have seen, though, many activities with Screen Chomp or Toontastic or Puppet Pals which have zero functional improvement over their arts-and-crafts analog cousins. If you do the same project with the Sock Puppets app that you used to do (or would have been able to do) with real sock puppets, then you’re still at Substitution (no functional improvement).

You cannot separate the tool from the process when evaluating technology integration. For example: this chart lists iMovie as a “Redefinition” app. Let’s play with iMovie as an app, while achieving:

  • Redefinition. Rather than interview someone and present their report based on that interview, student creates a documentary-style movie featuring the interview with their subject which contains the subject’s own words and the student’s interpretation/questions/discussion of context and relevance. Student submits video to local historical society for inclusion in related museum exhibit.
  • Modification. Student gives presentation to class while including multimedia examples produced using iMovie. This introduces new features only allowed by the technology (i.e. the “task redesign” stage), while still being similar to the original model (give presentation).
  • Augmentation. Student selects images to support or illustrate a topic and records their voice/narration. This is a substitution for giving a traditional presentation, but there is functional improvement because the images and visuals can be seen much more clearly.
  • Substitution. Student makes a video of themselves delivering a presentation. No improvement on function or process from delivering the presentation in class, except that it gets the student some experience with iMovie.

SAMR is not a clean, objective rubric, nor is it a cookbook. I would wager that you could take most of the apps on that list and use them to design activities which meet the entire range of SAMR. Simplifying the process of Technology Integration to a checklist of “tools” rather than “process/product, using tools” keeps us in the cycle of technology-for-technology’s sake.

A nod, though, to a SAMR chart which doesn’t make me break out in hives:

@danielbbudd, h/t to iLearnDifferent

The difference in this chart is that it focuses on the student activity rather than the apps. For example, under Note Taking->Redefinition, the key here is “Sharing notebooks and collaborating.” He mentions Evernote as a possibility. Does it have to be Evernote? Absolutely not– could be using a blog, Diigo, Google Docs or a host of other options. The task is what differentiates the level.

The good news is that it is incredibly easy to take a project and elevate it: share it. Notice how many of @danielbbudd’s Modification or Redefinition tasks involve collaboration or publishable products. Publish them, and let the class/school/families/community/world comment on them or discuss them. Collect them together into an archive which the whole class gets to keep and refer to. Jigsaw them so that each forms part of a larger class project and the whole becomes more than copies of the same parts. Just ask: what does this technology let me do that I couldn’t do before?

What’s Your Take?

Am I being unfair here? Am I the one off-base? Do you use these app-based models in your planning or coaching?