From E-Consultation Guide
Latest revision as of 20:21, 28 December 2007

E-valuing e-consultation workshop

On 27 April 2006, at Philipps-Universität Marburg, a dozen participants at the European Conference on E-Government went to a computer lab. There we used WebIQ to help us collectively brainstorm and rate ideas for valuing e-consultations.

Q1. What makes a good consultation?

Instructions: Think of the public consultations you have taken part in. List the features of a good public consultation. What makes one consultation better than another?

We came up with 82 features of a good consultation in 10 minutes of typing.

Q2. Consultation values

Instructions: In small groups discuss how we might group the features of a good consultation into categories. In doing this, try to identify the values by which we judge consultations. Then drag the features of good consultations into the appropriate categories.

Participants suggested 13 categories by which we might classify the features of a good consultation, and assigned some of these features to them. There was not enough time during the workshop to categorise all 82 items: the participants classified about half, and the facilitator (David Newman) classified the remaining items.

Q3. Methods and techniques for evaluating e-consultation

Instructions: List every evaluation methodology or technique you can think of that might be used to measure, assess or understand e-consultations.

We came up with 34 ideas (including some duplicates).

We then went on to rate the ideas for power and ease of use. But Frank Bannister pointed out that it would be better to rate techniques by how well they assessed each value identified earlier, rather than as general-purpose techniques for measuring everything that happens in an e-consultation.

So we are asking researchers and practitioners to add their suggestions for how to evaluate each value to: