A few months ago I had the chance to get a sneak preview of a new online video education service called Grovo. I enjoyed it so much that I made a note to post about it when it finally launched. Well, that’s now happened, so I suggest you take a look. As Eric Capper of Grovo told me, “Grovo is designed to teach people how to use the Internet with 2-minute videos that roll up into 10-15 minute courses featured on our dynamic learning platform. Courses cover everything from Internet Safety to Google Documents to YouTube and more.”
I’ll admit my first thoughts were negative. I reckoned that there was little I didn’t know about the web already and that I would find the videos boring. I was wrong because, as it turned out, there’s lots of important stuff I didn’t know and plenty of tools I had yet to be introduced to. I was also wrong because the videos are so short and so slick that I defy you to be bored or confused.
These little videos are a perfect example of where online learning content is going – short and simple.
Interaction in online media – a summary
The following table provides a summary of the four types of online interaction that we have explored in this mini-series of posts. For fuller descriptions, click on the links.
| Type of interaction | Examples | Applications |
| --- | --- | --- |
| Selecting | Multiple-choice questions; making selections within images; making selections within audio-visual events; rating scales; hyperlinks; menus | These interactions are easy to set up, easy for the user to work with and easy for the application to act upon, because the user is constrained in what they can select by the options that are made available. However, they do not allow the user a free choice and, when used for assessment, test only for recognition of a right answer, not recall. |
| Supplying | Text input; numerical input; spoken input; drawing | Here the user is given much more scope to make their input without the constraint of selecting from a list. These interactions are easy enough to set up but very hard for an application to act upon intelligently without extensive programming (think of all the code that’s used to process a search query). When used for assessment, all but the very simplest one-word or numeric answers will need to be reviewed by an assessor. |
| Organising | Matching; sequencing | These interactions are much less frequently used in online applications generally, but have a very definite role to play in interactive learning materials. |
| Exploring | Scrolling; zooming and panning; audio and video transport controls; stepping backwards and forwards through a sequence of items; rotating a 3D image; moving an avatar in a 3D space | The purpose of these interactions is not to gather information that the application can process, but rather to provide the user with an opportunity to search within a space or body of content. These interactions are engaging and immersive, and so have a valuable role to play in more user-centred online learning resources. |
Interaction in online media – exploring
And so to the last in our series of posts examining the various ways in which users can interact online. In case you missed them, you might want to look first at the introductory post for this series, Interaction in online media and the posts covering the three other forms of interaction – selecting, supplying and organising.
The fourth category – exploring – is somewhat different, in that it is much more user-centered. The purpose of the interaction is not to gather information that the program can process, but rather to provide the user with an opportunity to search within a space or body of content. The following examples should make this clear:
- Scrolling a document or menu, using scroll bars, a mouse wheel or a touch gesture.
- Navigating within an audio-visual resource, such as an animation, video or audio file. This could include rewinding, fast forwarding or viewing in slow motion, typically accomplished with a transport bar.
- Zooming or panning a large image such as a map or, on a mobile device, the contents of a document.
- Stepping back and forwards through a slide show.
- Rotating a 3D image, such as a model of a piece of equipment.
- Moving an avatar in a 3D world using keys or game controllers.
All of these interactions put the user very firmly in control – they determine what they see and how. And if we put all this in an adult learning context, you can soon see how exploring is going to be more engaging and more immersive than any number of multiple-choice questions and navigation buttons.
Interaction in online media – organising
We continue our exploration of the ways in which users can interact with online media by looking at those interactions that require users to sort and connect items on the screen.
In case you missed them, you might want to look first at the introductory post for this series, Interaction in online media and the posts covering what are undoubtedly the two principal forms of interaction – selecting and supplying.
Organising is not so prevalent as a mode of interaction but you’ll definitely need to use it from time to time, particularly in online learning materials and assessments.
Matching
Matching interactions require the user to identify related pairs in two sets of items. Typically one of the sets is made up of concepts and the other of attributes which characterise those concepts, for example:
- Match these animals with their natural habitats.
- Match these regions with their primary economic outputs.
- Match these books with their authors.
Matching can be accomplished with drag and drop interfaces or by selecting matching items from a drop-down list. Most e-learning authoring tools provide one or both of these options. Note that the lists don’t have to have an equal number of options.
There are two ways in which these interactions can work: you can have the user make all the matches and then submit their answer as a whole, or you can deal with each match as a separate answer, rejecting the mis-matches and providing feedback. The former is better suited to a mastery test; the latter to learning by having a go.
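To make the difference between the two approaches concrete, here is a minimal TypeScript sketch of a matching-question checker. Everything in it (the data shapes, function names and example content) is invented for illustration rather than taken from any particular authoring tool: one function scores a complete submission, the other checks a single match and returns immediate feedback.

```typescript
// Hypothetical data shape for a matching question: each pair links a
// concept (e.g. an animal) to its correct attribute (e.g. a habitat).
interface MatchPair {
  concept: string;
  attribute: string;
}

// The answer key for the question.
const answerKey: MatchPair[] = [
  { concept: "Polar bear", attribute: "Arctic tundra" },
  { concept: "Camel", attribute: "Desert" },
  { concept: "Orangutan", attribute: "Rainforest" },
];

// Mode 1: the user submits all their matches at once (suited to a mastery test).
function scoreWholeSubmission(submission: MatchPair[]): number {
  const correct = submission.filter((pair) =>
    answerKey.some(
      (key) => key.concept === pair.concept && key.attribute === pair.attribute
    )
  ).length;
  return correct / answerKey.length; // proportion correct
}

// Mode 2: each match is checked as the user makes it (suited to learning by
// having a go), so mismatches can be rejected with immediate feedback.
function checkSingleMatch(concept: string, attribute: string): string {
  const key = answerKey.find((pair) => pair.concept === concept);
  if (!key) return "That concept isn't part of this question.";
  return key.attribute === attribute
    ? "Correct: well matched."
    : "Not quite: try a different habitat for that animal.";
}

console.log(scoreWholeSubmission(answerKey));         // 1 (all correct)
console.log(checkSingleMatch("Camel", "Rainforest")); // feedback on a mismatch
```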
Sequencing
In this case, the user places a number of items in sequence, whether that’s their logical order (ordering) or their order of importance (ranking):
- Place these steps in their correct order.
- Place these events in time sequence.
- Place these risks in order of seriousness.
- Rank these authoring tools in order of preference.
Again these interactions can be accomplished with drag and drop interfaces or by selecting numeric positions from a drop-down list. Another possibility is that the user selects an item and then uses up and down arrows to re-position that item in the list. Most e-learning authoring tools will support at least one of these.
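As a rough illustration of the up/down-arrow variant, here is a short TypeScript sketch (the names and example sequence are invented) of the logic for re-positioning an item in a list and then checking the finished sequence against the correct order.

```typescript
// Move the item at `index` one place up (-1) or down (+1) in the list,
// returning a new array: the kind of logic behind up/down arrow buttons.
function moveItem<T>(items: T[], index: number, direction: -1 | 1): T[] {
  const target = index + direction;
  if (target < 0 || target >= items.length) return items; // already at an end
  const result = [...items];
  [result[index], result[target]] = [result[target], result[index]];
  return result;
}

// Check whether the user's sequence matches the correct order exactly.
function isCorrectOrder(userOrder: string[], correctOrder: string[]): boolean {
  return (
    userOrder.length === correctOrder.length &&
    userOrder.every((item, i) => item === correctOrder[i])
  );
}

const correctOrder = ["Plan", "Do", "Check", "Act"];
let attempt = ["Do", "Plan", "Check", "Act"];

attempt = moveItem(attempt, 0, 1);                  // user moves "Do" down one place
console.log(attempt);                               // ["Plan", "Do", "Check", "Act"]
console.log(isCorrectOrder(attempt, correctOrder)); // true
```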
We have one form of interaction left to review and that’s ‘exploring’.
Cisco Launches Home Telepresence
As rumoured here a couple of weeks ago, Cisco have launched their home telepresence in the US. Named umi, it offers full HD 1080p video calling through your TV.
This could be an interesting development, because home use of this kind of technology can only help drive corporate adoption.
To use it you need a umi box, camera and remote which will set you back $599.00, as well as a subscription of $24.99 per month which gets you unlimited calls. You’ll also need an HDTV to plug everything into, as well as a broadband connection.
Its greatest plus point, as I see it, is the decision to centre it around the existing TV, rather than creating an entirely separate device. There isn’t a great deal of information on the website, but judging by how few buttons there are on the remote it certainly looks like it should be fairly simple to operate.
Of course the obvious downside is that you can only connect to someone else who has umi, and it may be some time before it gets past the early adopter audience.
Oh, and before anyone in the UK gets too excited, the broadband requirements could be a stumbling block if Cisco are considering a roll-out here. It requires an upload speed of 1.5Mbps for 720p and 3.5Mbps for 1080p. Just in case that wasn’t clear, that’s the upload speed. I’m using a 24Mbps service, which gives me an actual 16Mbps download speed, but I’m lucky if I can get an upload speed of 1Mbps (according to Cisco’s test, I managed 0.933Mbps).
Interaction in online media – supplying
In the last posting in this series on interaction in online media, we looked at those forms of interaction that require the user to make a selection from a number of provided options. These came in a wide variety of forms – multiple-choice questions, pictorial selections, event selections (from audio and video material), rating scales, hyperlinks and menus of every shape and size. While convenient and easy to work with, selections are limited by the fact that the user can only work with the options provided and is therefore not able to express a preference which has not been predicted in advance. And when selections are used as a basis for assessing knowledge, the user is faced not with the task of recalling an answer from scratch (think University Challenge) but with the much simpler one of recognising the right answer from a list (think Who Wants to be a Millionaire).
There are many ways in which users might be asked to interact by supplying a response of their own making:
Textual responses
The user may be required to type a short text string into an input field (perhaps to enter their name onto a form or make a search query) or a longer free text response into a larger text box (as with a chat program, a free-text field in a feedback form, or when responding to an essay-type question). In the case of the former, the program can more easily interpret the response, point out or correct possible errors (“Did you mean …. ?”) and act accordingly. With free text, it is much harder for a computer to accurately make sense of what the user is trying to say, so typically this type of response has to be interpreted manually, i.e. by a human.
In a self-directed learning context, textual responses are used much less frequently than they might be (and certainly much less than they were in the heyday of instructional design, back in the 1980s). This is probably because text responses are trickier to set up and use than, say, multiple-choice questions, and because authoring tools no longer typically provide the functionality needed to successfully parse (make sense of) anything but the most basic text responses.
However, as long as the question is phrased in such a way as to encourage no more than a 1-2 word response, they can be used successfully to test for recall, for example:
- What is the name of the current French President?
- Which software company created the Android operating system for mobile phones?
You can constrain the user’s answer even more by using a ‘fill-in-the-blanks’ format. You may even provide a hint by showing how many letters you require and/or the first letter of the word:
- The name of the current French President is ________ _______.
- The company that created the Android operating system for mobile phones is G_____.
If your authoring tool will manage it, you can ask for multiple free text responses:
- List the applications that make up the Microsoft Office suite.
If you’re lucky enough to have the right tool, you may be able to list a range of possible right answers, perhaps some with common mis-spellings; you may also be able to look for keywords within the user’s response, provide feedback for common mistaken responses, check for or ignore the case, and so on.
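As a rough sketch of what that kind of answer checking might look like behind the scenes (the interface, function and accepted answers below are invented for illustration, not taken from any authoring tool):

```typescript
// A hypothetical free-text answer checker: it accepts a list of right answers
// (including tolerated mis-spellings), ignores case and surrounding spaces,
// and falls back to a keyword check for longer responses.
interface TextAnswerSpec {
  accepted: string[];   // exact answers, including tolerated mis-spellings
  keywords?: string[];  // keywords that must all appear in a longer response
}

function checkTextAnswer(response: string, spec: TextAnswerSpec): boolean {
  const cleaned = response.trim().toLowerCase();

  // Exact match against any accepted answer (case-insensitive).
  if (spec.accepted.some((answer) => answer.toLowerCase() === cleaned)) {
    return true;
  }

  // Otherwise, look for all the required keywords within the response.
  if (spec.keywords && spec.keywords.length > 0) {
    return spec.keywords.every((kw) => cleaned.includes(kw.toLowerCase()));
  }
  return false;
}

// Example: "Which software company created the Android operating system?"
const androidQuestion: TextAnswerSpec = {
  accepted: ["Google", "Googel"], // second entry tolerates a common mis-spelling
};

console.log(checkTextAnswer("  google ", androidQuestion)); // true
console.log(checkTextAnswer("Microsoft", androidQuestion)); // false
```

In a commercial authoring tool this kind of logic would normally be configured through forms rather than written by hand, but the underlying checks are much the same.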
The real joy of using typed responses in a self-paced lesson is that it provides the feel of a conversation between the author and user. This technique was used widely back in the 1980s but is much less prevalent today, largely because it is rarely even considered as a possibility. We need some new exemplars of this approach to provide inspiration.
Spoken responses
An alternative is to ask the user for a spoken response, which requires, of course, that they have access to a microphone and that their computer has sound capability. The most common application for this is during some form of live online session using instant messaging or web conferencing software. Clearly this type of interaction will normally depend on there being another human at the other end – voice recognition is improving, but as a form of interaction it is usually limited to simple selections from a list, not free-form responses.
Another option, within a self-directed lesson, is to have the user record a response to a question, usually within the context of a scenario. They can then listen back to their recorded response to check how well they did. This technique works well in training call centre operators.
Numeric responses
Simple numeric responses can be implemented using the single-line text input fields described earlier:
- In what year was the Great Fire of London?
- How many players are there in a Rugby League team?
- What percentage of the UK population dies from heart disease?
If your software allows, you can provide feedback based on how close the user gets to the right answer.
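A minimal sketch of that kind of ‘how close did you get’ feedback, with tolerance bands chosen purely for illustration:

```typescript
// Grade a numeric answer by how far it falls from the correct value.
// The tolerance bands are invented: within `exact` counts as right,
// within `close` earns a "nearly there" hint.
function numericFeedback(
  response: number,
  correctValue: number,
  exact = 0,
  close = 5
): string {
  const error = Math.abs(response - correctValue);
  if (error <= exact) return "Correct.";
  if (error <= close) return "Very close. Check your figure again.";
  return "Not quite. You're some way off.";
}

// Example: "In what year was the Great Fire of London?" (1666)
console.log(numericFeedback(1666, 1666)); // "Correct."
console.log(numericFeedback(1664, 1666)); // "Very close. Check your figure again."
console.log(numericFeedback(1750, 1666)); // "Not quite. You're some way off."
```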
Other ways of obtaining a numeric value are using sliders or rotary controls:
- Move the slider to set manufacturing volumes for your product range.
- Move the knob to set the required volume.
Pictorial responses
Another way of allowing users to interact in a free-form manner is by sketching, perhaps on an electronic whiteboard in a web conferencing system or in a graphics program. When you allow users to size or adjust the position of a window or icon, you are also allowing a free response.
In summary, while selecting is convenient, supplying is expressive and checks for recall (or the user’s ability to copy and paste!).
See the introductory post for this series: Interaction in online media.
The Elearning Debate 2010
For the second year running, elearning developers Epic hosted The Elearning Debate at the historic Oxford Union. I was pleased to be able to attend, although this year I was without my Onlignment colleagues who were both committed elsewhere.
This year’s motion was:
This house believes that technology-based informal learning is more style than substance.
Speaking for the motion were Dr. Allison Rossett, Nancy Lewis and Mark Doughty, and arguing against the motion were Professor William H. Dutton, Jay Cross and David Wilson. The debate was chaired by Rory Cellan-Jones, the BBC’s technology correspondent.
The arguments
Allison Rossett (for) opened the debate by stating that they were not there to argue against informal learning per se, but rather to integrate the informal with the formal. She argued that formal training is the better option for training doctors, pilots and others whose roles deal with public safety. Research was quoted to support her argument that discovery-based learning isn’t reliable; novices need to be shown what to do and how to do it. To attain expertise requires structure, guidance and repeated practice, and according to Rossett this can only be achieved in a formal environment.
William Dutton (against) didn’t speak with the same confidence and self-assuredness as Rossett, but made an argument that focussed on the impact of the internet not just on learning but on everything that we do. You cannot overestimate its importance as a source of information. Research undertaken by Dutton and his colleagues has shown that people trust the internet more than they trust traditional media outlets, and he argues that we should be asking how much students trust their lecturers.
Nancy Lewis (for) focussed her argument on the lack of structures and frameworks to support informal learning. It was her contention that as learning and education professionals we set high standards of excellence for ourselves, and that similar standards must be set for informal learning. Until we have agreement on what informal learning is (a common set of templates as she described it) and formal proof of its impact, it remained more style than substance.
Jay Cross (against) asked us to think about how we learnt to walk and talk; was it through formal education or an informal approach? He believes that informal and formal already co-exist, and always have done, and that it’s only within formal teaching environments that any distinction is made. In a world in which the pace of change continues to accelerate, formal learning simply can’t keep up and people are getting the information they need from informal sources. He got the biggest laugh of the day by reading out a quote about the effectiveness of informal learning – by none other than Allison Rossett.
The debate was then opened up to the floor. There was noticeably less contribution than last year, and a real struggle to find anyone to speak for the motion.
Mark Doughty (for) asked the question ‘when did technology last make a difference to the bottom line?’ and then argued that it hadn’t, at least not for a long time. Through an anecdote about Apollo 13 flight director Gene Kranz, he argued that when ‘failure is not an option’ only formal learning will do.
David Wilson (against), like last year, was left to remind us what the motion was (or more accurately what it wasn’t). It’s not an argument about whether technology-supported learning has value. He argued that the question of substance over style was only being asked by L&D, whereas for workers who are using these tools every day there’s no question that it has substance. He challenged the need for L&D to put labels on things; informal learning is part of work and doesn’t need an extra layer added to it by L&D. He closed by stating that if we view informal learning as something to be controlled by L&D we’re on the wrong path.
In the summing up, Rossett argued that we need guard rails around our learning and only formality provides that, while Dutton said we should embrace the informal and that institutions have nothing to fear from networked learners.
The result:
The Ayes to the right 54, the Noes to the left 259
It probably comes as no surprise that I was with the Noes.
Much of the conversation before we went into the debating chamber was that those speaking for the motion had something of a poisoned chalice, and it was hard to see how they could possibly win. It’s easy for me to say of course, but I did think that those speaking for the motion fell very quickly into the trap of arguing for formal learning despite that not being what the debate was about.
There was some lively backchannel debate taking place on Twitter using the #elearningdebate hashtag, and Epic did a great job of pushing out audio clips on AudioBoo throughout the debate. The only additional thing that I would like to see next year is live video streaming for those not there in person.
Epic should be congratulated for organising a very entertaining and enjoyable event that got positive feedback from everyone I spoke to afterwards. The only part that didn’t really hit the mark was ‘Magic Seth’, who demonstrated why even magicians shouldn’t rely on technology in live events.
The debate continues on the Elearning Debate website where you can view videos and photos, as well as adding your comments and casting your vote.
Interaction in online media – selecting
A few weeks ago, in Interaction in online media, I explained why I believe that interaction is so fundamental to our online experience, how it helps us to navigate, to configure, to converse, to explore, to provide information and to answer questions. I also described four basic mechanisms for interacting online – selecting, supplying, sorting/connecting and exploring. I said I’d go into these in a little more detail, so I’m starting now with selecting.
There are many forms that selections can take, some extremely commonplace, some more rarely deployed:
Multiple-choice questions (MCQs)
In this familiar format, the user is presented with a question stem and picks an answer from the options provided. Typically the stem is presented textually, but may be more elaborate than this, using images, audio or video as required. The options are also usually textual, but could as easily be pictorial. Some examples:
Santa Cruz is the capital of La Palma. True or false?
Which of the following countries is a member of the European Community?
Tick those items on the list which best represent how you feel about working with customers.
Click on the picture of the person you would select for the position.
The simplest questions ask the user to make a binary choice – yes or no, true or false. Multiple-choice questions give a wider range of alternatives, typically between three and six, from which the user chooses one option. With multiple selection questions, the user can choose more than one option from the list.
MCQs can be used as polls, where the objective is simply to gauge opinion, or as elements within learning materials and assessments. When the objective is to check knowledge, well-constructed MCQs can certainly be valuable, although they can only assess recognition (of a fact, an instance of a concept, a cause or effect, a place or position, etc.) rather than the ability to recall the same. Generally speaking, recognition will always be easier than recall. If the user needs to be able to recall something specifically to carry out a task effectively, then an MCQ (or any other interaction involving selection) will not test this adequately.
MCQs can also be used to challenge the user to make critical judgments, to think for themselves:
What would you do if you were …. ?
What do you think was the cause of … ?
How could … have been avoided?
How would you remedy … ?
Ideally the user should be provided with a response that is tailored to their particular choice. This could take the form of some immediate feedback, but could also result in the scenario being taken to another stage with further decisions for the user to take. Branching scenarios may sound complex, but in fact they are just MCQs sequenced conditionally in this way.
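To show how little machinery that actually involves, here is a hypothetical TypeScript sketch in which each scenario node is simply an MCQ whose options carry their own feedback and point to the next node. All of the names and the scenario content are invented for illustration.

```typescript
// A branching scenario as conditionally sequenced MCQs: each option carries
// its own feedback and the id of the node it leads to.
interface ScenarioOption {
  text: string;
  feedback: string;
  nextNodeId: string | null; // null ends the scenario
}

interface ScenarioNode {
  id: string;
  question: string;
  options: ScenarioOption[];
}

const scenario: Record<string, ScenarioNode> = {
  start: {
    id: "start",
    question: "A customer complains their order is late. What do you do first?",
    options: [
      {
        text: "Apologise and check the order status",
        feedback: "Good: you acknowledge the problem before acting.",
        nextNodeId: "checkStatus",
      },
      {
        text: "Explain that delays are the courier's fault",
        feedback: "The customer hears an excuse, not a solution. Try again.",
        nextNodeId: "start", // send the learner back to have another go
      },
    ],
  },
  checkStatus: {
    id: "checkStatus",
    question: "The order is still in the warehouse. What next?",
    options: [
      {
        text: "Offer an upgrade to next-day delivery",
        feedback: "A concrete remedy: the scenario ends well.",
        nextNodeId: null,
      },
    ],
  },
};

// Taking one branch: show the feedback, then move to the next node (if any).
function choose(nodeId: string, optionIndex: number): string | null {
  const option = scenario[nodeId].options[optionIndex];
  console.log(option.feedback);
  return option.nextNodeId;
}

// Walk the scenario, always taking the first option for the sake of the demo.
let nodeId: string | null = "start";
while (nodeId !== null) {
  nodeId = choose(nodeId, 0);
}
```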
Pictorial selections
In this case the user is asked to select one or more parts of a picture, for example:
Identify the fibula in this diagram of the skeleton.
Where on this map of Europe is Estonia?
Identify the safety risks in this photograph.
As you would expect, these interactions are extremely useful for assessing any knowledge that has a spatial element.
Event selections
Another interesting variant is to ask the user to stop an audio track, video or animation when they spot something in particular, for example:
Press the stop button when you hear jargon used unnecessarily.
Click on the pause button whenever you spot a good example of non-directive questioning.
This format, while more technically complex to implement, could play a valuable role in checking whether users can recognise particular behaviours or circumstances. It could also be implemented with a group in a virtual classroom, perhaps by asking participants to click the ‘raise hand’ button, or something similar, when they spot something occurring in a piece of audio or video.
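One way the self-paced version could be wired up in a browser, as a sketch only: it assumes an HTML5 video element with a made-up id and hand-picked time windows in which the target behaviour occurs.

```typescript
// Event-selection sketch: when the learner presses pause, compare the playback
// position against hand-picked time windows (in seconds) where the target
// behaviour occurs. The element id and the windows are invented for illustration.
const targetWindows = [
  { start: 42, end: 48 },
  { start: 95, end: 103 },
];

// Assumes the page contains <video id="lesson-clip" src="..." controls>.
const video = document.getElementById("lesson-clip") as HTMLVideoElement;

video.addEventListener("pause", () => {
  const t = video.currentTime;
  const spotted = targetWindows.some((w) => t >= w.start && t <= w.end);
  console.log(
    spotted
      ? "Well spotted: that is an example of the behaviour."
      : "Nothing to see just there. Keep watching."
  );
});
```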
Rating scales
Here the user is presented with a series of statements and is asked to rate each one against a pre-defined scale. This scale may be expressed numerically (1-5, 1-10, etc.) or using labels (strongly disagree, disagree, etc.).
Hyperlinks
We all know what these do. A hyperlink, whether textual or pictorial, navigates the user to a different resource or a different part of the same resource. Links can be displayed separately or embedded within textual content.
Menus
Menus provide a more structured means for navigation and for accessing the various features available within a resource. Menus can be activated as simple lists, rather like multiple-choice questions, as scrolling lists, as rows or columns of buttons, as drop-down menus, as tabs, or as hierarchical trees. Menu selections can also be made by voice recognition.
All sorts of devices can be used for making selections, including keys, a mouse, a touch screen, or the user’s own voice. Whatever the device, the user is restricted to choosing from predetermined options, a constraint that is lifted when we take a look next time at the second form of interaction, ‘supplying’.
Home Telepresence Coming This Week (Rumour)
There are plenty of rumours running around that at a scheduled press event on Wednesday morning Cisco will be unveiling consumer telepresence. According to Kara Swisher at All Things Digital the hardware will cost between $200 and $500 in the US, although the lower price may be subsidised by the network carrier.
There’s no doubt that the full telepresence experience is impressive, but will the same experience be possible when piped through your home TV or PC?
We’ll have to wait until Wednesday to see if the rumours are true, but if they are is this something that you would be interested in using at home?
Top Ten Tools 2010
Image by yoppy
Since 2007, Jane Hart has been inviting learning professionals around the world to contribute to her crowdsourced list of the top 100 tools for learning.
This is my contribution for 2010 (in no particular order).
- Twitter has become my favourite way to connect with people online, and one of the first places I go to when searching for information. The unexpected and serendipitous connections you make can be at least as useful as the deliberate ones.
- Skype has little competition when it comes to VoIP for consumers and small businesses. It’s the tool I’m most likely to use for voice communication, and for the last two years I’ve done away with a business landline and replaced it with a Skype number.
- WordPress is the open source software that this blog runs on, as does my personal blog. Anyone who knows me will know I’m a big fan of Drupal (and nothing can beat it for complex projects) but with version 3.0 WordPress has become the ideal tool for simpler projects.
- Evernote is my ‘external brain’, the place that I use to record anything I may want to refer to in future. It’s brilliant because it’s accessible on all of my devices. It’s largely replaced delicious for me as I had too many instances of saving bookmarks to sites that later disappeared.
- Instapaper is a fabulous service that allows you to save content for later reading, and presents it in a consistent and wonderfully readable format. I save content from wherever I’m browsing, but will usually read it using the Instapaper iPad app.
- Google Reader/Reeder is still the best feed reader I’ve found. The only thing that’s made it better for me in the past year is using Reeder on the iPhone and iPad to access my account. I can’t imagine reading the quantity of content that I do in any other way.
- Google Chrome has pretty much become my browser of choice. I gave up on Firefox a long time ago (too slow and buggy) and although it’s improved a lot recently, Safari hasn’t quite kept up with Chrome in terms of speed and stability.
- Dropbox is one of those services that you just can’t believe anyone wouldn’t be using. It’s simply the best way to keep my Macs in sync and make all of my content available on my phone, iPad and from any browser. Just brilliant!
- The iPad has already been mentioned in this list in terms of some of the apps I use, but it deserves its own place in the list. I think we’ve only just scratched the surface of what this kind of device will enable.
- Webex Meet is a service that is in beta and currently free for meetings of up to 4 people. If I need something more complex than Skype, such as document sharing, this is my tool of choice. Whether they are able to maintain it as a free service remains to be seen.
You can add your contribution here.