e-Learning@DMU Benchmarking Blog

This is a place for some chat about the HEA's e-learning benchmarking exercise - at least in its DMU incarnation...

Monday, November 27, 2006

Hindsight
What a pain-in-the-butt hindsight can be; it generally shows that you should have done things differently and highlights the problems in your decision-making process. For instance, how do you arrive at a collective, selectoral decision about specific players when only one person has the whole picture about those players (in England's case the coach) and yet his view may be coloured by the past, or by his personal preferences, or by his inability to look beyond the defensive or the hopeful. There's something of the soothsayer in this; in retrospect we can all see that the collective success at the Ashes pivoted around a team where each individual contributed. (Even Ian Bell, folks! Two fifties at Old Trafford set Vaughan up for his ton and enabled us to take a whole day to try to bowl them out; plus after Jones he took the most catches.)

Does that team ethic exist now? Or are we desperate for certain names to perform? The selections of Harmison and Anderson look like gambles after-the-fact, and the choice of Jones and Giles instead of Read and Monty hardly changed the game. I don't think that this is a case of the heart ruling the head, but I do wonder whether Fletcher's conservatism will enable us to press home the few advantages we have over the baggy greens.

So what do we do ahead of Adelaide? Maybe thinking a little less about them and a little more about us. We have to win a test, which means we have to take 20 wickets (given that we took 10 in Brisbane this looks ominous). So the team needs some clear-cut, strategic-yet-ruthless decisions to be taken now: namely, drop Harmison and Anderson and play Mahmood and Monty; going with two spinners in Adelaide plus KP means that you need a proper keeper, so reinstate Read; and remove the burden of captaincy from Fred and give it to Strauss, so that the former can concentrate on trying to win us the game without the weight of decision-making for the collective around his neck.

So to today's benchmarking bit, with reference to "clear-cut, strategic-yet-ruthless decisions". Are we going to get some from benchmarking? Will hindsight show that those models we have deployed, based on history or personal preference, have failed to deliver value-4-money? What will it unearth about the hidden implementation of e-learning? From talking to some of our benchmarking team it seems that we're getting to the point now where some of the nuances of implementation are being unearthed: notably strategic and cultural differences within faculties and departments, and amongst students.

But also, what will that history tell us about our future? Do we stick with the same approach that seems to have bought us long-term gains based on a few quick wins so far, or is it time for a re-think? If so, is that re-think based on what's needed now, or in the next 5 years - and can we know that anyway?

BTW The mighty Saddlers won 2-1 and sit 7 points clear. They aren't vexing me enough to comment here. Yet. You'll be pleased to know.

Friday, November 24, 2006

stickability

I figure that today's post should focus on benchmarking - the least said the better about that batting performance, really. Except that this game could be the making of Ian Bell, a man who represents the future of the England top-order. Much of his torment in 2005 at the hands of Warne and McGrath was a function of slight misjudgement outside off-stump, which the best players work to correct, or plain bad luck - as in the case of Warne's third leg-spinner at Lord's, which Bell read but which didn't turn. On the back of that the media and the public were quick to judge - them's the breaks - but the advent of Web2.0 technologies and cultures means that we are all critics now.

Anyway, Bell's mental strength over the past year highlights that the risks taken in sticking with him were worth taking. He will go on to score 8,000 test runs and average over 45. For all his faults this is one element of Fletcher's leadership that counts - the ability to take advice and then judge a performer in light of the bigger picture. What or who will make a long-term difference to performance? Are we willing to take some short-term hits along the way?

So I'll be interested to see whether we can uncover an e-learning picture @DMU that looks long-term. One that accepts the hits (servers down, network grumbles, a staff approach to e-learning that is too didactic, implementing a technological approach to e-learning that is too didactic) for longer-term gain. This might be uncovering good practice and celebrating it, but it might also involve celebrating those who have stuck with an apparently risky approach to embedding a new technology because it was pedagogically the right thing to do, or because they saw something elsewhere that triggered an idea and moved them beyond their own technological safety-net to try something new. Testing personal boundaries, trying new stuff and sticking with it - that's what I'm interested in seeing.

Anyway, latest news from benchmarking-central is as follows.

  1. All area leads have been in touch to identify how they are going to collate their evidence. This generally involves talking to/emailing: the other Faculty e-Learning Co-ordinators; Chairs of Faculty Learning and Teaching; PVC(s); Directors of Library/Information Services; Chairs of Subject Authority Boards and Programme Leaders/Heads of Department; staff e-learning champions in the faculties. We're also looking back at current and past student evaluation. One thing that is interesting about the OBHE methodology is that it can be read as a very corporate approach, focused upon staff. Hmmmh.
  2. There's a meeting fast approaching to discuss progress (date tbc). At that point the internal project Wiki will be released, as a store for info about e-learning docs and a place for us to write up our findings and collectively edit our work. I'm looking forward to seeing if that works.
  3. Our IRD has a section on institutional drivers for e-learning. We're going to work that one through together and then discuss our thoughts/matrix with the movers-and-shakers. It will be interesting to see if what we think as leaders-on-the-ground accords with the thoughts of the powers-that-be...

There will be no posts for the next two days; I don't work weekends and neither should you. Trade Unions worked hard to get us 2 days off, so make use of them. However, Monday will herald two hugely positive things: the new Faithless album; and the rise of a third narrative herewith, namely Walsall FC's inexorable progress back towards the giddy heights of League 1.

FYI This morning I have mostly been listening to 'News and Tributes' by The Futureheads, and 6Music. I do not think that this has impacted my writing. But you never know...

Thursday, November 23, 2006

Now I'm not one to encourage Richard with this tenuous link thing for too long - god, 6 weeks, not sure I can last that long - but...
My take on the benchmarking has to be that I am interested in us knowing where we are and whether or not we have improved. I am less interested in knowing that we are better/worse than some other HEI up the road or far away, because comparisons are very difficult at anything other than a simplistic level and can/do lead to Daily Mail-type league tables, which force us to play a game which is unhelpful, indeed counter-productive, both to us as an institution and to the sector in general. I know that all seems a bit wishy-washy liberal but, hey, you can't fight your true self (well, not at my age!). All of that will only lead to someone saying "why aren't we as good as those Aussies at cricket then?" or calling for the sacking of England Managers, because there is a feeling that we still run the empire and should be able to rule the world at whatever game we choose to play (there, got the link in after all!)

So, as long as this process benefits us and our students I think it is worth doing. When it becomes a stick to beat us with because we have fallen below Poppleton in some e-learning league table then it is time to draw stumps and go home...
Strike-bowlers

So I'm pondering whether:
1. you can win a Test with only one (mentally) fit strike-bowler on the pitch;
2. Duncan Fletcher really is the Emperor's new clothes; and
3. Monty kicked a few doors in when he found out the bad news.

How can such a conservative strategy, at a venue where the Aussies haven't lost since 1988 and which doesn't see many draws, ever produce a positive result for us? Surely the management would have weighed-up Harmison's lack of physical and mental strength - but then the Trescothick farrago shows the triumph of hope over judgement - or seen that to win this game you have to take 20 wickets. Other than Flintoff we only have one strike-bowler and he's carrying the drinks-tray. Ugh.

So how does this relate to benchmarking then? Well, the two narratives are going to run pretty-well parallel in time for the next 6 weeks, so there's no escaping either, and "tenuous links" is my middle name. In particular I'm interested in:
  • strategy and management: how does your choice of technology/personnel mechanistically impact on performance in a team environment;
  • communication: what is the psychological impact of selections/decisions on performance in a team environment; and
  • innovation: how do you judge what risks to take?

So, my Ashes tip for top England wicket-taker in this series is Monty Panesar and he's not even been picked for this test. The key here is that batting-in-depth, like rolling out a single technology, will only buy you time. Like strike-bowlers, it's innovation and risk that moves you on. Seems that we learned nothing from 2005.

Wednesday, November 22, 2006

Re: benchmarking processes@DMU

The OBHE produced an Institutional Review Document that details the issues we are covering. Our approach focuses upon the following areas, with area commanders/team captains/area-leaders as follows.
  1. Lead areas
Context - Richard Hall
Strategy Development - Harish Ravat
Collaboration and Partnership - Richard Hall
The Management and Leadership of e-Learning - Heather Conboy
Resources for e-Learning and Value for Money - Atul Mamtora/Jon Tyler
e-Learning Delivery - Malcolm Andrew
e-Learning and Students - Parminder Kaur
e-Learning and Staff - Steve Mackenzie
Communications, Evaluation and Review - Nick Allsopp

2. Information gathering

Each area-lead will develop and propose a mechanism for gathering data. They will identify the key issues that they wish to address, who they wish to be involved/help in gathering data, the types of information already available and the areas that need more work. They will draw up a schedule of work and agree this with Richard Hall by next Wednesday 8 November.

Note 1: there are extant networks: for instance, the Humanities Champions; the Business and Law departmental co-ordinators; the Health and Life Sciences school champions. Where possible we will involve these networks in our work - this is critical.
Note 2: our work may involve staff/student focus groups/surveys. Where possible I would like to co-ordinate this so that we can utilise focus groups to gather data for multiple areas. We are also looking to deliver a small student survey via the library PCs.
Note 3: negotiating work-loads for this exercise is critical. I do not want this to become all-consuming. The due date for our IRD is 22 January 2007.
Our current e-learning strategy and plan need re-thinking. They need to adapt from where we are to where we want to be. I have no answers to this, but I would like to co-ordinate the production of a responsive framework and approach to delivering e-learning solutions and support to staff and students. This means opening-up networks across traditional groups and looking at new opportunities refracted through innovative technologies.

This means that our benchmarking approach will incorporate: what staff and students like about what we do and how we do it; what they don't like; what the blockages are to their development of e-learning in-and-around their curricula; and what would help.

Whoa! I forgot the answer to the key question: why did you pick the methodology that you picked? We picked OBHE.

A few reasons here folks:
  1. we met some of the OBHE guys and their pilot-phalanxes at the HEA event in London and I thought we could do business with them;
  2. the methodology gives us room-for-manoeuvre, and aligns with our institutional ethos;
  3. there's flexibility within the themes that we can develop; and
  4. I like their ethos: respect for the plurality of missions; non-prescriptive; the importance of leadership; continuous improvement; fact-based management.

I kinda think that some decisions aren't really best mulled-over. For instance, deciding whether or not to pick your best wicket-keeper or a journeyman-glovesman with a falling batting average for the Ashes should be a no-brainer. You've got your key criteria (e.g. won't drop Ricky Ponting on 1 and lose you the urn) and your central ethos - check which align and stick with them.

We (well okay, I) have signed us up to a HEA-led benchmarking process. Now I wonder (no offence meant to the HEA) whether this has as much to do with benchmarking as England have to do with successful one-day cricket. I'm not saying it hasn't but I'm a little confused.

This looks more like internal baselining of current practice using one of a range of methodologies. Moreover it looks like we have the green-flag to use an adapted form of the chosen evaluative path. We are certainly not involved in standards or results-based benchmarking, although I'd be willing to chew the fat with you about whether it is process-based benchmarking. Across the HE sector we don't seem to be evaluating the processes that lead to specific outputs, in part because we are all pulling-in different data in order to make our claims, and because we are all using/adapting different methodologies. Whether this approach allows us to objectively understand the reason for differences in performance given the complexities involved in embedding e-learning is a moot point.

However, being involved in this process gives DMU the opportunity to do two things: evaluate where we are now so that we can recast our e-learning strategy and plan, and our institutional approach to e-learning-based creativity and innovation; and to build networks of users beyond the silos in which they currently operate. This will be a continuous thing. Or process. Or activity. Whatever - it'll be something.

Along the way I hope that we can look outward to other HEIs and check out some best practice, or different practice. Or even similar practice. We'll grapple with objectivity at that point.