Tuesday, July 07, 2009

Measuring open education

Ever since Kathy Sierra floated the idea of mixing marketing with education, I've been looking for a way to mix up their typical research methods. (Yes, I hold on to an idea a long time...)

Last Friday, after I had confirmation that Ako would be funding us to measure the effectiveness of our open education work, I invited Shelagh Ferguson from Otago University's Faculty of Marketing, and Russel Butson from the same university's Educational Development Centre, to help start some thinking about a good method for measuring open education. Shelagh has a growing interest in social media marketing (in the broader sense of the word) and Russel has an established interest in informal learning. The conversation that followed between the three of us revealed that Kathy was right! The two fields do have a lot to offer each other, especially informal learning and marketing research methods.

The meeting was to gauge Shelagh and Russel's level of interest in the project. Going from the conversation, I reckon they are interested as long as it doesn't draw on too much of their time. I put it to them that I would like to spend half a day with them and a few other researchers from fields such as open and distance learning, drawing up a research method for measuring our open education work.

Russel and Shelagh agreed that there would be two aspects worth measuring:
  • Usage is the stuff managers are typically interested in: it speaks directly to the bottom line and is relatively quick and easy to measure. It involves data such as numbers of page views, response rates, enrollment numbers, completion rates, costs, savings, etc.
  • Value has more to do with people's sense of the work: how they perceive its importance, usefulness, effectiveness, etc. Value-based research would be broken up by stakeholder group - staff, students, managers - probably with a structured survey targeting each group.
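To make the usage side of this concrete, here is a rough sketch of the kind of aggregation involved. All resource names and figures below are invented for illustration; the real data would come from server logs and enrolment records.

```python
# Hypothetical usage figures for a couple of open education resources.
# All names and numbers are made up for illustration.
resources = [
    {"name": "Anatomy videos", "page_views": 12500, "enrolments": 140, "completions": 98},
    {"name": "Horticulture wiki", "page_views": 8300, "enrolments": 60, "completions": 42},
]

# Aggregate the simple counts managers tend to ask for.
total_views = sum(r["page_views"] for r in resources)

for r in resources:
    rate = r["completions"] / r["enrolments"]  # completion rate per resource
    print(f'{r["name"]}: {r["page_views"]} views, completion rate {rate:.0%}')

print(f"Total page views: {total_views}")
```

Nothing here is sophisticated, which is rather the point made above: usage data is quick to gather and summarise, whereas value is not.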

Shelagh recommended that a project with such a small amount of funding and such a tight timeline would be better based on a convenience sample. We would first identify a sample of open education work we wanted to measure, then identify individuals from each of the stakeholder groups for the value-based research.

Russel has been using videography (as in video-ethnography) to gain insights into how students at the university interface with university services etc. It could be that such a method of data gathering could be useful for our convenience sample.

My next steps:
Identify a sample of open education work we want to measure.
Formulate a plan and method for gathering usage data.
Find a date for the half day (or full day if possible) meeting where research experts come together to formulate a method for measuring the value of the open education work.


Creative Commons Licence

This work is licensed under a Creative Commons (Attribution) license.



13 comments:

kathy Sierra said...

Oh, how this post makes my day : )

Anna said...

Hello Leigh.

If you are still looking for open education resources to incorporate into your research, we welcome you to contact us at Curriki to discuss possible projects! Excellent blog, by the way!

Sincerely,
Anna

Global Curriki Consultant
abatchelder at curriki.org

alexanderhayes said...

Open ?

How do you measure implicitness ?

Interesting.....my word verification is 'brent'.

Leigh Blackall said...

I'm hoping the marketing researchers will be able to help with that. You see, it's easy to measure usage, such as how many views a video resource has had, even how many of those views were from people using Polytechnic computers.. and from all this usage data we will be able to say how much it has cost/saved us in terms of delivery costs. As well, we can see how many free-to-reuse resources were used in any given resource - such as a CC photo in a slide presentation, or a CC sound track in a video.. and we can give this sampled media a worth based on common royalty fees for equivalent media, telling us how much we have saved or gained by using free-to-reuse media.. etc etc.. we can get a lot out of usage that the managers will be interested in.
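A back-of-envelope version of that savings estimate might look like the sketch below. The media counts and royalty fees are entirely invented; the real figures would come from auditing the sampled resources and from published royalty rates for equivalent commercial media.

```python
# Free-to-reuse (CC) media found in a sampled resource, priced at typical
# royalty fees for equivalent commercial media. All figures are hypothetical.
cc_media_used = {"photo": 24, "audio_track": 3, "video_clip": 2}
royalty_fee = {"photo": 15.00, "audio_track": 40.00, "video_clip": 120.00}  # NZD, invented

# Savings = what we would have paid in royalties for equivalent media.
savings = sum(count * royalty_fee[kind] for kind, count in cc_media_used.items())
print(f"Estimated saving from free-to-reuse media: NZ${savings:.2f}")
```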

But as you point out - how do we measure implicitness.. or what the marketing researchers call "value", or what the educationalists seem to prefer to call "learning outcomes". The tools education uses to measure outcomes are crude and very unsophisticated. The tools that marketing uses to measure (if that's even the right word) "value" are much more in-depth.. such as ethnographic methods.. observing behavior and statements etc.

We haven't gone very deep into those methods yet.. but I can tell from this first conversation already that the marketing perspective will give us educationalists a fresher set of methods and questions.

alexanderhayes said...

Damned updates.

Midway through my comment reply it cut me off to install "update 2 of 3".

Familiar to some but of course you wouldn't have that problem Leigh:)

I appreciate the elucidation of what you're growing.

Questions still remain around OER's effect on an organization's aptitude to embrace change or process-flow disruption.

The investment in attribution is an interesting one. It would mean that the military style conquer-and-rule form of education is giving way to networked peasant-virtue osmosis.

Likewise, those conditioned to ownership and re-designing rules are definitely interested in worth... i.e. cost ratios.

The challenge I suspect you face is in value. Market forces speak of value ....cleverly disguised worth.

Implicitness.

Google is a good example. So are the Wiggles.

Does OER sell a good thing or is it really changing the face of education.....or learning in fact ?

Yes....nice blog. Still thinking you could do a lot to get your message out there using http://leighblackall.com/

Leigh Blackall said...

Nah, I'm sticking with the free range. I hope you didn't buy that domain name.. if so, I should probably buy it off you at least and set it to redirect to this blog.

You're right of course, the measure of usage is only relevant as a measure against what we already know.. and practically useless in defining what is merely emerging. The only use would be as an argument to make room (or not) for continued development.

The OER label is a problem, because the OER I refer to is mostly about a practice and a perspective. Like the free-ranging perspective. All OER is, then, is a willingness to license media CC BY or SA, to put it on popular media platforms, preferably in formats that are open standard. The practice follows in that (I reckon) it inspires new ideas for doing business and trading services.

The OER that others refer to is trapped too far into that old way, probably because they, like me, were always asked to justify their work with measures against old methods, without the resources to do the really valuable investigation into values.

Thanks for the comments Alex, I must watch out for that trap. There's another trap though.. the majority of the education sector that I have worked in is extremely conservative, and so most if not all demand that measure against established methods. And it is this that holds them back from discovering new practices. So, I might not be able to show much at all in terms of value at Otago Polytechnic, because it will be such a small and local sample so as to be only relevant to Otago.

Parag Shah said...

Hello Leigh,

Along with measuring 'usage', and 'value', is it possible to measure 'learning', using methods other than traditional testing?

Is it possible for a system to gauge 'proof of learning' from a learner's blog posts, comments, participation in forums, etc?

Such a system may be able to take open education to the next level: one which not only makes learning material available to learners but also automatically complements a university degree.

I feel this is important because such a system can capture what a learner learns through informal learning practices.

--
Regards
Parag

Leigh Blackall said...

Hi Parag,

I reckon it is possible to measure learning from such forms of evidence as a blog, whether it be a text, photo, audio and/or video blog, but I doubt that it is possible to automate it. The closest I can see we are is through standardised assessment tools like 'units of competency' that set out what it means to be skilled and capable in X. A person in charge of assessment uses these units as a basis from which to observe, interview and look for evidence in a candidate. So, even though this existing process is still very subjective, we rely on the professionalism of the assessor and the validity of the assessment criteria.

This method only goes so far, and there are many critics of the units of competency system out there. But if we focused on general and universal skills - like servicing a car as per the manufacturer's guidelines - then we know the specific skill set and knowledge base needed to be competent at this, so we devise a standardised assessment unit for it, show a candidate a range of methods for gathering and presenting evidence, and engage professional assessors to assess the evidence.
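To make the idea concrete, a unit of competency can be thought of as a checklist of criteria that the candidate's evidence is matched against. The sketch below is a toy model only - the unit title, criteria and evidence tags are all invented, and a real assessment would of course involve professional judgement, not set arithmetic.

```python
# A hypothetical 'unit of competency' modelled as a set of criteria.
# All names are invented for illustration.
unit = {
    "title": "Service a car to manufacturer's guidelines",
    "criteria": {"change_oil", "replace_filters", "check_brakes", "record_service"},
}

# Evidence gathered by a candidate (e.g. blog posts, videos), tagged by criterion.
evidence = {"change_oil", "replace_filters", "check_brakes"}

# The assessor's bookkeeping: which criteria are not yet evidenced?
missing = unit["criteria"] - evidence
competent = not missing

print("Competent" if competent else f"Not yet competent, missing: {sorted(missing)}")
```

The point of the sketch is only that the unit gives the assessor something explicit to check evidence against; the subjective part - judging whether a piece of evidence really demonstrates a criterion - is exactly the part that resists automation.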

On an open education scene, it becomes a question then of how many of these units can we standardize across national and institutional boundaries without adversely impacting local and unique skills and knowledge, so that a pass in a particular set of competencies has currency in other areas?

Leigh Blackall said...

I meant to add that measuring 'learning' like this is not something I plan to do directly. It would be a good thing to do, and we have the methods and instruments to do it, but this project is partly about using new methods and instruments to measure a greyish area that is prior to 'learning' ie: the motivation to learn in this way.

Parag said...

Hi Leigh,

Thanks for your response. I agree that units of competency is a good way of measuring learning. I understand that this is not your focus at the moment. I hope you do not mind me exchanging ideas with you :-)

I am trying to think whether there are alternative ways of measuring learning. The issue with units of competency is that it is hard to keep formal testing units current in high-tech fields such as computer science, because of the rate at which formal knowledge changes. Also, I feel it is hard to measure informal learning using units of competency, because a person may know just one thing about a topic, which they have had to learn to accomplish something, yet it is hard to make them take a test which covers just that little subset of the field.

In an open education scenario I feel that it would be really nice if we can measure such tacit knowledge which a person may have learned informally through their footprints on the Internet.

--
Thanks
Parag

Parag said...

Hi Leigh,

Having units of competency on a wiki is a wonderful idea. This way they can be updated far more quickly, and by engagement with experts rather than by arbitrary people.

Just like we rely on crowdsourcing to define units of competency, what are your thoughts on relying on crowdsourcing for validating the competency of individuals?

Cathy Davidson is doing a wonderful experiment with crowdsourcing grades at Duke University (http://www.hastac.org/blogs/cathy-davidson/how-get-out-grading)

I think such a methodology has the potential to really bring about participatory learning and ecentralized education.

Parag said...

I meant decentralized education

Leigh Blackall said...

Hi Parag,

The main problem I see with the wiki and crowdsourcing idea is that we might not have the numbers needed to make it work. Wikipedia for example runs on a 1% rule. I read somewhere that the English Wikipedia is the product of only 10 000 people's work. I think we would need about that many people for a units of competency project.. so that counts out any national focus. It could only work at an international level.. and there's an even bigger idea!