SenseAbility Webinar 3

Welcome everybody to the SenseAbility Evaluation Toolkit Webinar powered by Principals Australia Institute and hand crafted for your viewing pleasure by Beyond Blue. We're looking forward this afternoon to hearing from Dr. Kate Jacobs but let's just check in with her in our green room, first of all. Good afternoon to you, Kate.

Good afternoon, Mark, and everyone watching.

Fantastic. We'll get back to you in a few minutes' time so don't run away. My name is Mark Sparvell. I'm from Principals Australia, usually in beautiful Adelaide but beaming in to you live at the moment from Darwin. We have Shannon McGeary available to help you with any technical problems. She can be accessed through the text chat box. You can drop her a line and she can give you a hand if you're having any difficulties. Joining us today also we have Lee-Anne Marsh from Toorak College, but we'll hear more from her a little bit later on, and we have Rene Hahn, who's the education project manager for Beyond Blue. But enough about us. Let's just find out for a moment a little bit about the participants for this afternoon's session. You'll notice on the screen I'm just indicating the text chat box. Could I ask that everybody spend a couple of moments to let us know who you are, where you're from, and what position you hold in your organisation.

So, it's time for a quick meet and greet in online spaces and moderators, tech support people and participants, let's all use that text chat box to introduce ourselves, our role, and where we're from on this fine day today, thank you. Now, while you're introducing yourselves in the text chat box, this event is going to be recorded. It's being recorded at the moment and will be made available to participants after the event through Rene at Beyond Blue. We also keep a transcription of the text chat box and that will also be made available to you. During the session, you can provide feedback because it gets a little bit lonely for people who are talking to an audience they can't see. So, you've got two smiley face options. One I'm indicating is there, and you can play with that button at the moment. The other one is just there. So, you have two smiley face options, I'll just send you one in the text chat box and indicate a virtual applause at the top. So, feel free to take a moment to play with those as well. Hi Tracey, good to have you on board.

Another couple of quick tools to draw your attention to. Should Dr. Kate Jacobs ask for hands up in the classroom, there's your virtual hands up button and you're quite welcome to click that and pop up your hand as I've done. That way, we can check who's paying attention. And next door to the hands up button is a very simple yes/no tool. Thank you, Shannon. And you just click on those buttons again to undo those. So, one click to put up your hand, one click to take it down. Easy, breezy. Fantastic. Now, Dr. Kate Jacobs is a lecturer at Monash University with interests in cognition, psychological development and, interestingly, the variables which impact upon self-perception of intellectual ability. Dr. Kate this afternoon is going to explore with us the Program Evaluation Toolkit for SenseAbility, its processes, the participants who should engage in this, and the kinds of timelines that might need to be considered. I think this will be a really important session because we all know that data is a perishable good; it has a certain best before date and a certain use by date.

One of the things I really like about the SenseAbility Evaluation Toolkit is its blended research approach which provides both formative and summative information. So, Dr. Kate, while you're running your session I'll encourage everybody who's in the participant panel to use the text chat box to pose questions, to think out loud, to take notes that we can all see, and at certain times during this afternoon and obviously at the end we'll use that text chat to summarise and to generate a Q & A session so we can find out a little bit more about the SenseAbility Evaluation Toolkit and how it might work in our particular settings. So, Dr. Kate, we're delighted to invite you right now onto the virtual stage.

Thank you very much for that, Mark, and hello to everyone and thank you for coming along today and thank you to Beyond Blue and Principals Australia Institute for asking me to present today on program evaluation generally and then more specifically the SenseAbility Evaluation Toolkit. I'm also joined today by Lee-Anne Marsh from Toorak College who, towards the end of today's presentation, is gonna talk about her personal experience using the SenseAbility Evaluation Toolkit. Okay, so let's get started, if I can find my mouse to go across. Okay. So, I guess just before we start I'll just briefly outline what I'll be going through today.

First I'm just gonna talk kind of generally about program evaluation, so the what, why and how of program evaluation. Then, I'm going to go specifically through the SenseAbility Evaluation Toolkit, which myself and two colleagues from Monash University, in collaboration with Beyond Blue, developed to provide schools and teachers with a ready-to-go toolkit to evaluate the programs that they run with their students. And then at the end, Lee-Anne Marsh will talk about her personal experiences with it and then we'll both be available to answer questions, should there be any.

So, just in case anyone's not aware of SenseAbility, SenseAbility is a program developed by Beyond Blue. It's a social-emotional learning program and it's a strength-based resilience program designed for those working with young Australians aged 12 to 18 years, so really designed to be used in high schools. And it consists of a suite of modules developed to enhance and maintain emotional and psychological resilience. And the idea behind SenseAbility is that it's really up to the school and the teacher to pick and choose which modules they would like to use with their students based on the needs of their particular students.

So, now, the what, why and how of program evaluation. So, first just to define program evaluation: it's a planned way of determining whether a program or strategy is meeting its objectives. So, is what we are doing with our students actually having an impact and, more specifically, is it having a good impact, rather than a negative one? So, if we relate this to SenseAbility, it's really just asking the question, is SenseAbility working how you intended it to? And we would expect that SenseAbility would work to increase the social-emotional development of the students that it's being used with.

So, why bother evaluating the programs that we run with our students? It's a good way to find out if what you are doing is working or isn't working and this is really important because resources are limited so we don't wanna be spending time and money on a program that's not having the effect that we would like it to. So, it's a good way to make sure that what we're doing is working how we intended it to.

Program evaluation can help identify the strengths and weaknesses of a program. And so, this can identify maybe particular aspects of a program that we would like to extend to other students in the school or continue on. It might identify aspects of the program that are in need of change and adaptation in order to increase the effectiveness of the program. It can help us determine whether or not to repeat the program, continue the program. And it also helps to add to the pool of evidence-based practices for supporting young people's mental health in educational settings and making sure that what we are doing is evidence-based practice, that the research does show that the programs we are using are effective is really important, coming back again to making sure that we're using the resources available to us in the best way possible.

So, when might we decide to evaluate a program? This can be answered in two ways: first, in terms of evaluation design, and second, in terms of the school calendar. So, I'll go into each of these briefly. So, first, when to evaluate. Excuse me. I had a frog in my throat then, that's better. Okay so, evaluation design. You can start to evaluate a program at different stages. So, for example, you can evaluate a program before you start the program; so really, this is not so much evaluating the program, but evaluating what kind of program is needed, or whether a program is needed. So, conducting a needs assessment with the students, asking them, "What are the things that you would like help with? Is it self-esteem? Is it study skills? Is it interacting more successfully with your peers? With your parents?" So finding out what's important to the important stakeholders, the students. This could also, of course, be conducted with parents and teachers as well.

You can evaluate a program at the beginning of a program in order to establish a baseline. So collecting data before the program starts, which then allows you to compare it with data collected later on, either during the program, or at the end of the program, which then allows you to determine if there has actually been a change in what you're trying to change, whether that's self-esteem, resiliency, maybe results in tests. You can evaluate while the program is in progress, so kind of getting a snapshot of how students are feeling about the program right in that particular moment before they possibly forget, if they were only to be asked after the program's finished. And then you could also evaluate after the program's finished to get a general overview of the program.

Another important thing to consider is actually when to evaluate, in terms of the school calendar, so making sure that the program is being implemented at an appropriate time of year and also that the evaluation project is being conducted at an appropriate time of year. So when might it be the most useful to conduct the evaluation? When might it be the easiest? So maybe running a program evaluation at the very end of the year when, obviously, teachers are already busy enough, might not be the best choice.

And you also wanna make sure that when you are running and evaluating a program, that it's an acceptable time; it's a valid time of year. So, running and evaluating a program in the middle of school exams might not be good, because you might get not so positive results back from your students, but that might not be to do with the program itself but rather with school exams being a very stressful time of year. So, considering the time that you run the program and the evaluation is very important, to make sure that the results that you get are accurate. So, who should evaluate the program? There's two main choices here really. The person who delivers the program can do the evaluation themselves. So, the SenseAbility program is designed to be run by teachers, so it would be the teacher implementing SenseAbility with their students who would conduct the evaluation themselves. Or you can get an outsider, someone not connected to the program, to do the evaluation.

So, this could be another staff member in the school who's not running the program themselves with the students, or it could be someone from outside the school such as an educational consultant or an academic from a university. And there are advantages and disadvantages to both of these approaches, and we'll go through each of these now. So, having the person who runs the program conduct the evaluation can be a very positive thing because that individual knows very well what the program is about. They are running each session, so they know the program and its intricacies very well, and that can be a positive thing. These individuals also often have easy access to students, so setting up time to collect data from students and to ask questions is not very hard to do; it's already a part of their normal everyday activities. A person running the program may also have a very trusting relationship with the students, so getting accurate and valid information from them can often be easier than if it is an outsider.

The individual running the program also tends to have a very good understanding of the school or the service, and I say service here because SenseAbility and program evaluation can occur in schools but also in community-based services such as youth programs and things like that as well. So, having the person who runs the program also doing the evaluation can be very helpful because they have an understanding of the school environment. They have a good understanding of the context, and will be aware of any incidents that might have occurred in the school year that could have influenced the results obtained but that shouldn't actually be attributed to the program. So, having a good understanding of the context allows you to make sure that you are interpreting your results appropriately.

However, the disadvantage of the person running the program also doing the evaluation is that sometimes they might be too close, and so unable to see any problems that are present. And this is quite a natural human tendency: when we are very connected to something, very invested in something, and believe in the importance of running a social-emotional learning program with our students, we can sometimes be a bit biased towards seeing only the positives and not the negatives, and that's a very normal thing. Also, the person running the program may not have the skills needed to conduct an evaluation. So, in these kinds of instances, you might decide to use an outsider. And with an outsider, one of the advantages is that the individual is independent of the program, so it might make students feel that they can speak more freely. Of course, there could always be the opposite effect: the students don't know this person, and so they don't feel like they can speak freely. So, obviously it depends on the individual student.

Having an outsider run the program evaluation is a good thing because they can be objective and less prone to bias, because they have no personal investment, and they might have additional specialised skills in running program evaluations. They might be able to do things that a person who usually runs the program themselves might not be able to do. However, outsiders can sometimes not understand the school context, and so might not integrate important factors into the evaluation process and therefore may reach conclusions that are not the most valid. They may have limited access to the students, because they are not there at the school every single day, and setting up times that are mutually agreeable to the outsider and the students might be quite tricky. And they may not have a trusting relationship with the students, so the students may not feel comfortable opening up to them and talking freely about the program.

So, I guess the main point to keep in mind when deciding who should evaluate is that really the best approach is to work collaboratively when developing your evaluation strategy. So, it's important to ask for feedback. We're gonna talk in a minute about some of the different methods, the how of conducting an evaluation, but if you are going to be conducting some interviews, then you might want to get feedback on your interview questions from a small group of students or from colleagues, just to make sure they make sense. It's always good to ask for help when running an evaluation. So you could recruit students or run focus groups and get their suggestions on how best to run the evaluation or what kinds of things to focus on, and obviously other staff members as well. And this can be really helpful in creating a school-wide ownership of the program evaluation project, which would then make it more likely that you'll be able to stay the course and see it through to the end. And of course it's always really important to make sure that you obtain principal approval before conducting any kind of formal evaluation, and often gaining approval from parents as well is also important to do before starting.

So, how do we actually go about evaluating a program? To determine whether a program has met its objectives, you need to first be clear about what your objectives actually are. So, your objectives must be really clear and measurable. So, for example, what might your objective be for using the SenseAbility program with your students? You may figure this out by completing the following sentence: as a result of this module or this program, students will... And we've got a completed example on the next slide. So, as a result of completing module X, students will be able to identify and value their own individual strengths. And the objective of students being able to identify their individual strengths is something that is really measurable. You can ask them to list their strengths before and after the program and see if there's actually been an increase in the number of strengths, so that's a really clear and measurable way of assessing the effectiveness of a SenseAbility program.

So, there's different types of evaluation designs. And we're going to talk about three here. There's the here and now snapshot evaluations. So, this is where you're gauging where students are currently and where their perceptions, and what their perceptions of the program or the module, the particular SenseAbility module are. So, really just trying to figure out what's happening right in this exact moment. And it tends to be getting quite a broad picture of what's going on. Another option is to compare two groups, so you might decide to collect data from one group that has completed the SenseAbility program and compare it to data obtained from another group that hasn't completed the program.

So, you might have two year seven classes at your school and you decide to run the program with one of them and not with the other. You can then obtain data from both of the groups and compare to see if there are any differences, and I guess you would be expecting that results would be more positive for the class that has completed the SenseAbility program. A good thing to keep in mind, though, as a way of making sure that everything is fair and equitable when running this at schools, is that after you run the program with one group and compare their results to the group that hasn't received the program, that second group does end up receiving the program in the end, so it's fair for all students.

The third option is to look at what happened as a result of the program. So, this is where you're using the same students and you're obtaining data from them before the program started and you compare it to the data obtained afterwards. So, in this instance, going back to the example of those two year seven groups, you would just run the program with all year sevens at the same time and compare their results with before the program to afterwards.
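As a rough sketch of the two-group design just described, the comparison boils down to the difference between the two classes' average scores. The numbers and the questionnaire scale below are invented purely for illustration; a real evaluation would use the scoring rules of whatever questionnaire was chosen.

```python
# Hypothetical questionnaire scores (higher = more resilient) for two
# year seven classes: one completed the program, one hasn't yet.
class_with_program = [14, 16, 13, 15, 17, 14]
class_without_program = [12, 13, 14, 11, 13, 12]

def mean(scores):
    """Average score for a class."""
    return sum(scores) / len(scores)

# Two-group design: compare the averages of the two classes.
# A positive difference favours the class that completed the program.
difference = mean(class_with_program) - mean(class_without_program)
print(f"Group difference: {difference:+.2f}")  # → Group difference: +2.33
```

With a gap this small and classes this size, you'd treat the result as suggestive rather than conclusive, which is in keeping with the cautions about interpretation later in the session.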

Okay, so there's lots of different types of evaluation data that can be used. The first one is satisfaction questionnaires. And these just gauge whether students were satisfied with the program, so it's not really about whether there's been any change in, say, their resiliency or their self-esteem or their academic achievement. It's just whether they enjoyed the program and whether they found it easy to understand, which is of course very important: if students aren't enjoying it, or if they're not understanding it, then they're less likely to engage in it, which means that there's less likely to be a change in what you're trying to change.

Another option is to conduct interviews with students, so you might decide to run a focus group with a small group of students, maybe five to 10, or you might run it with the whole class, or you might decide to conduct individual interviews. And you could conduct telephone interviews with parents and individual interviews with teachers as well, so it's important to get lots of different stakeholders involved if you can. Interviews can be conducted before a program's started; these can be used to help identify the needs of the students and what they would be interested in learning and being involved in, and what the parents' concerns are and what they would like the school to maybe focus on, so this can help you decide what kind of program you might like to implement. Interviews can be conducted during a program to find out what's liked and what's not liked, and it can often be a good thing to do this during the program rather than after, because individuals are much more likely to remember what they like and what they don't like rather than waiting until maybe four weeks down the track when they may have forgotten.

And you can also conduct interviews after a program, to get overall impressions of the program. Interviews are great because they can provide very in-depth information about how people think about a program, and they enable people to talk about things that are important to them, so interviews are often quite open-ended. It's up to the person being interviewed to share and talk about what they think is important. But interviews tend to be less reliable at identifying changes as a result of program participation. Because of that open-endedness, because of that ability for different interviews to go down different tracks and to produce different types of information, it's less likely that you're gonna be able to reliably conclude whether the program has been effective or not in producing the objectives that you set out to achieve, so increasing happiness and reducing things like worry and anxiety.

Another type of evaluation data is to use questionnaires that are measuring constructs related to program objectives. Another term for constructs is maybe attributes, traits of an individual. So some examples here might be self-esteem, which is the level of self-worth someone ascribes to themselves, how good they feel about themselves. Another important construct is resilience, so how quickly someone is able to bounce back after they've had a setback. When you use questionnaires to collect data from the students, you can do this in two ways. You can collect data at pre and post time points, so the same students complete the same questionnaires before they participate in the module or program, and then the same questionnaires after, and you can compare before results with after results to see whether there's been an increase in self-esteem, for example. Or, as I mentioned earlier, you can compare across two groups. So you have one Year Seven class that has completed the program and one Year Seven class that hasn't. They both complete the same questionnaires, and then you compare to see if maybe one group has a generally higher level of self-esteem than the other one.
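The pre/post comparison described above is a paired difference: each student's post-program score minus their pre-program score. The sketch below uses invented scores on a hypothetical self-esteem scale, just to show the arithmetic.

```python
# Hypothetical self-esteem scores (higher = better) for the same
# six students before and after completing a module.
pre_scores = [12, 15, 9, 14, 11, 13]
post_scores = [14, 16, 12, 15, 11, 16]

# Per-student change: post minus pre (pairing matters, so scores
# must stay in the same student order in both lists).
changes = [post - pre for pre, post in zip(pre_scores, post_scores)]

# Average change across the class; a positive value suggests scores
# rose overall, though it doesn't by itself prove the program caused it.
mean_change = sum(changes) / len(changes)
print(f"Mean change: {mean_change:+.2f}")  # → Mean change: +1.67
```

Looking at the individual `changes` list is also worthwhile: a class average can hide the fact that a few students went backwards, which echoes the later point that not all students are impacted in the same way.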

So there's two types of questionnaires. They can be developed by you. So these are often a good option because they can be really specific to your particular needs. So based on your particular students and the areas you've identified that you would like to target, you can develop specific questionnaires that try to measure that. So, has this program helped you to feel more prepared for your exams or more positive about your exams or something like that? The problem with these kind of questionnaires, though, is that developing good questionnaires is actually a very hard process. And often, it takes multiple research studies to develop a questionnaire that is actually doing a good job.

So another option, then, is to use previously developed questionnaires, ones that have been researched and have been shown to be good measures of the constructs they say they measure. The disadvantage of using validated questionnaires, though, is that they may not be specific enough for your needs or appropriate for your particular student group. But there are actually a lot of questionnaires out there that are freely available, so if you have a good enough look, you hopefully should be able to find something that is appropriate for your particular needs.

Okay, so another type of evaluation data is case studies. These could be of a class group or a whole school, so they provide in-depth information regarding a class or a school. They might involve interviewing and/or using questionnaires with various stakeholders. And often, the case study is written up presenting all relevant background information, so outlining the nature of the school, the student body, the community, etc. It can include informal conversations and encounters that occur regarding the program. There may be an email that you got from a parent saying how happy they were that their child was participating in the program, or maybe just a little note about a conversation you had with a staff member in the hallway, so it can include more anecdotal evidence. And so, it has the potential to provide a really rich description of what happens in a class or a school. But the downside is that the conclusions drawn from the information provided by case studies may not be generalisable to other classes or schools, because they are so particular to the individuals who are involved. And generally, we do want our program evaluation to provide information that would allow us to determine whether it's appropriate to use the program with other classes or with other schools.

And another type of evaluation data is school data, so data that is already being collected by your school routinely. And this is a really great idea to use because it removes the need for you to collect data, which can sometimes be a bit time-consuming. So some examples of existing data might be attendance records of students. So, since the SenseAbility program started being implemented in the school, have attendance rates increased? Has the number of absences decreased? And you could compare over terms and years. You could look at student achievement, in terms of NAPLAN or ATAR scores, year 12 scores. You could look at the number of bullying incidents. So has the number of bullying incidents decreased since the implementation of this social-emotional learning program, and so on and so forth.
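The term-over-term attendance comparison described above is a simple rate calculation once the records are exported. The counts below are invented, and the term labels are hypothetical; a real school would pull these figures from its own attendance system.

```python
# Hypothetical per-term records: (absences, total enrolled student-days).
attendance = {
    "Term 1 (before program)": (120, 4000),
    "Term 2 (during program)": (95, 4000),
    "Term 3 (after program)": (80, 4000),
}

# Convert raw counts to a comparable absence rate for each term.
for term, (absent, total) in attendance.items():
    rate = absent / total * 100
    print(f"{term}: {rate:.1f}% absence rate")
```

A falling rate across terms is consistent with the program helping, but as the session notes later, other school events in the same period could also explain it.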

Okay, so while program evaluation is very important, because we wanna make sure that what we're doing is actually working and that we're using our resources the best way we possibly can, there are of course limitations associated with program evaluation that are really good to keep in mind when interpreting the data you've obtained. So a good thing to keep in mind is what it means when your evaluation has identified a problem. For example, students' scores from before the program to after the program have actually decreased rather than increased, so there's been a decrease in resilience or a decrease in self-esteem, which you wouldn't expect. Or there's been negative feedback from students who haven't enjoyed the program. Just because such problems are identified doesn't necessarily mean that the program doesn't work; rather, it might just indicate that refinements are needed. So maybe you need to conduct longer sessions.

So often when we run an intervention, the more sessions we have the more effective the program is. So you might decide that rather than just doing three sessions, you might increase it to six or 10. The results might indicate that you need to target the sessions more around a specific skill, or maybe you need more targeted sessions for a particular group. So boys face different issues and tend to have different concerns than girls, so if you're in a boys' school maybe there's a program that's particularly targeted to boys that you might wanna use rather than a program that's meant for both genders.

Another thing to keep in mind is that issues identified during an evaluation may not actually be related to the program itself but rather to other factors related to the school or certain things that have happened during that set time period. So it might be that the program itself and the evaluation were conducted at maybe not the best time of year, so during exams when students are generally stressed and quite anxious, or maybe there's been a critical incident in the school that understandably had a big impact on the staff and the students. Any information you obtain at that time is very likely gonna be an inaccurate reflection of the program, and rather a reflection of what's happening in the school and for the students at that particular time.

So it's important to keep that in mind when interpreting results. You also wanna be cautious about your goals, so maybe not expecting too many things to change as a result of the program, or expecting that all students are gonna be impacted in the same way and in a positive way by the program. Some students need longer periods of time, and some students may react differently to different strategies. And so the main thing is that any evaluation that's occurred provides you with more information to make a more informed decision next time about where you're gonna head with your programs.

Another good thing to keep in mind is that using multiple evaluation strategies is often best. Each evaluation strategy has its own strengths and weaknesses. So for example interviews, where you can interview a few students, versus using questionnaires: each of those has its own strengths and weaknesses, so using a combination of both can make sure that the strengths of one compensate for the weaknesses of the other. So it's often beneficial to use multiple evaluation strategies, so focus groups as well as pre and post questionnaires, and to use multiple stakeholders to make sure you're getting a really well-rounded view of what's happening.

And then really the main thing with program evaluation is to use the findings that you've obtained. For an evaluation to be effective, for it to be of use, you need to use the evaluation findings; something needs to happen as a result of the evaluation. So evaluation data can be used to change the existing program, possibly running more sessions, running longer sessions each time, or even running fewer sessions. It may be used to consider new programs or strategies, so maybe a much more targeted program for your particular students. And evaluation data can be used to disseminate the results to others in order to show support for the continued use of the program, so sharing that information with your principal or with parents. And it can also be used to help others learn from your mistakes, which can be really helpful.

So when reporting evaluation data it's really good to include exactly what you did, so a description of the program, what the objectives were, who delivered it and how often, and also who the target audience was, say year seven male students. It's also good to indicate what you evaluated, who conducted the evaluation, when it was conducted and how it was conducted. And providing an indication of the response rate is really good. So out of a class of 50, how many students actually completed the questionnaires, both before and after the program? This can be important, because if only a small percentage of students complete the questionnaires then the results might not actually be a good representation of the class as a whole. And then what did you find from your evaluation? What conclusions have you drawn, and what needs to happen now; so what are the action items and who's responsible for making sure those actions occur in order to improve the program and make changes?
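The response-rate check mentioned above is simple arithmetic, but it's worth writing down if you're reporting on several classes. The numbers below are hypothetical, and the 70% threshold is just an illustrative rule of thumb, not something specified by the toolkit.

```python
def response_rate(completed: int, class_size: int) -> float:
    """Percentage of the class who completed both pre and post questionnaires."""
    return completed / class_size * 100

class_size = 50
completed_both = 38  # hypothetical: students who filled in pre AND post

rate = response_rate(completed_both, class_size)
print(f"Response rate: {rate:.0f}%")  # → Response rate: 76%

# Flag results that may not represent the whole class
# (the 70% cut-off here is only an illustrative rule of thumb).
if rate < 70:
    print("Caution: results may not represent the class as a whole.")
```

Note the count used is students who completed *both* questionnaires, since the pre/post comparison only works for students with data at both time points.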

Okay. So, just to summarise, a good evaluation matches the program objectives: what did you try to change? It informs program change. It's planned; it's really good to plan your evaluation before you actually run the SenseAbility program, or whatever other program it might be. It's collaborative, involving multiple stakeholders, and it employs multiple sources of data, so questionnaire data in addition to interviews and so on.

So now I'm just going to run you through the SenseAbility evaluation toolkit. This was created by myself, Dr. Andrew Rubin and Dr. Daryl Mayberry. We're all from Monash University, and we worked in conjunction with Beyond Blue. It's a ready-to-go toolkit that can be used to evaluate the effectiveness of the SenseAbility program, but it's also designed so it can be used with other social and emotional learning programs. So just basically what I said: the SenseAbility evaluation toolkit was developed to increase the capacity of schools and others to conduct evaluations of the effectiveness of SenseAbility as well as other social and emotional learning programs.

There are really two parts to the toolkit. There's the main toolkit, which focuses on quantitative evaluation, so those questionnaires. By completing a questionnaire, students end up with an individual score indicating whether they are high, moderate or low on, say, self-esteem. This is as opposed to qualitative evaluation, which is what the supplementary material focuses on: the interviews, the open-ended data that you get.

So we'll go through each of the parts of the toolkit now. The first part is the SenseAbility evaluation manual, which is just a step-by-step guide that takes you through how to use the toolkit. It has pictures to make things easier, and it's really easy to use and follow, to make sure you know how to use each of the different aspects of the toolkit. Within that manual is a flow chart indicating what the process of using the toolkit looks like. It's suggested that you read the manual first, then read the introductory material to quantitative program evaluation. That's there for those who would like to learn more about program evaluation in general, and more specifically quantitative program evaluation; for those who feel they already know enough about this, it can be skipped.

Then it's got some templates, and I'll go through what those are in a second. The next step is to select the self-report scales that you're going to be using. You have students complete those scales before the SenseAbility program starts, you implement the program, and you have the students complete the same scales after the program's finished. You enter the results into a scoring template, which we'll go through, and then you complete the self-evaluation forms. So we'll go into those more specifically. Like I said, the introduction to quantitative program evaluation is a general introduction to program evaluation, so a lot of what we've gone through today but with a bit more depth, and then it goes more specifically into what quantitative program evaluation is.

Then here we've got the principal or manager template and the parent template. These are ready-to-go letters; you just insert your school name and so on, and they can be used to inform your principal and parents of the evaluation project you have planned, to make sure that everyone's aware of what's going on.

Then we've got the self-report scales. These are the measures that can be used to collect the quantitative data. The toolkit comes with some brief instructions on how to use the scales, and then each of the scales is explained, as in the example below. Here the first scale is resilience, so you've got a brief definition of what resilience is (the ability to bounce back or recover from stress) and a brief explanation of why we should bother trying to increase students' resilience: high levels of resilience have been associated with more positive outcomes, such as lower levels of anxiety and depression. Then at the bottom there's a suggestion as to which SenseAbility modules you might use the scale with. Each of the scales has been linked to specific SenseAbility modules, and in this case we've got Essential Skills and Purpose.

So, next slide. There are six scales in total, and I'm conscious of the time, so I won't go through each of those. This is what the scales look like. This scale here is labelled 1A: the 1 means it's the resiliency scale, and the A means it's the form that students complete beforehand. The 1B form is exactly the same in terms of instructions and items, but we've included the 1A and 1B labels so you're clear which scales were completed before the program started and which after, so when you get to the data entry stage you're not likely to mix up the before and after data.

This is the data scoring Excel spreadsheet. After you've had your students complete the questionnaires before and after the program, you enter the responses into this spreadsheet. We've got lots of tabs along here; there are three tabs for each of the six scales. So here we've got resiliency time 1 and resiliency time 2. Time 1 is before the program, time 2 is after, and then we've got a results tab, so each scale has those three tabs. You just enter students' responses next to each of the participant numbers, and in the background there are lots of formulas that calculate each student's overall score. After you've entered the before and after data, you go to the results tab. This tells you the percentage of students who had a high score on the scale before the program, in this instance the self-efficacy scale, compares that with the percentage of students who had a high score afterwards, and gives you a graph as well to show whether there has been a change. In this instance no students had a high score before the program, so there is no blue bar here, but just over 60% of students had a high score afterwards. And it gives you a statement here indicating that these results suggest there has been an increase in self-efficacy and that the SenseAbility program has been effective.
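The comparison the spreadsheet's background formulas perform can be sketched in code. This is only an illustrative sketch, not the toolkit's actual formulas: the cut-off value, the item responses and the notion of summing items into a total score are all assumptions made for the example.

```python
# Illustrative sketch of the pre/post comparison described above:
# sum each student's item responses into a total score, classify totals
# at or above an assumed cut-off as "high", then compare the percentage
# of high scorers before (time 1) and after (time 2) the program.

def percent_high(totals, cutoff):
    """Percentage of students whose total score meets or exceeds the cut-off."""
    if not totals:
        return 0.0
    high = sum(1 for t in totals if t >= cutoff)
    return 100.0 * high / len(totals)

# Each inner list is one student's item responses (invented values).
time1 = [[2, 2, 3], [1, 2, 2], [3, 2, 2]]
time2 = [[4, 4, 3], [3, 4, 4], [2, 3, 3]]

CUTOFF = 10  # assumed "high score" threshold; the real toolkit defines its own

before = percent_high([sum(student) for student in time1], CUTOFF)
after = percent_high([sum(student) for student in time2], CUTOFF)
print(f"High scorers before: {before:.0f}%, after: {after:.0f}%")
# → High scorers before: 0%, after: 67%
```

The results tab's "no blue bar before, over 60% after" pattern corresponds to exactly this kind of jump in the percentage of high scorers between time 1 and time 2.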

Then the last step of the main part of the toolkit is the self-evaluation form. This is what you complete after you've collected all the data and put it into the data scoring spreadsheet, and it's designed to help you interpret the data. There are three different sections, depending on whether your scores show no difference from before to after the program, a decrease, or an increase. You go to the relevant section, and it's got a number of questions you can ask yourself, with a spot for you to jot down your answers, along with some reasoning behind the questions, then another question you can ask yourself, and again you can write down your answers. At the end, you can jot down some thoughts about how you might go about changing the program or where change is needed. So it's a kind of guided method of thinking about the results you've obtained.

And then the second part of the toolkit is the supplementary material, which is focused on qualitative evaluation. There are three parts to that: an introduction to what qualitative program evaluation is; a second part that goes into more depth about using interviews and case studies for program evaluation; and an evaluation template, which gives you a template for presenting the results of a case study so that you can present it to others. Okay, so that's a lot of information, I'm sure. But now I'm going to hand over to Lee-Anne Marsh from Toorak College, who tested the toolkit for us. She's going to talk about her personal experiences, so just bear with us for a second while we...

No worries, and thanks for that. While you guys organise a swap-over to Lee-Anne: Lee-Anne Marsh, for everyone's information, is the Health, PE and Well-being coordinator at Toorak College, and we're going to hear from her shortly about how the evaluation toolkit is being used at Toorak College and the impact it's having. So over to you, Lee-Anne.

Thank you, Mark, and thank you, Kate, for giving such a detailed explanation of the evaluation toolkit. We began using SenseAbility last year as part of our professional development; we had somebody come in and talk to us about how to implement the program. Following that, we were asked if we would be interested in trialling the evaluation toolkit. So in term 4 last year we took that on board, and I was the person responsible for implementing it, basically doing a usability evaluation of the toolkit. I conducted this with two classes that I teach, year sevens and year eights, so there were approximately 45 students in the group we worked with. I suppose our reasoning for using the evaluation toolkit was to get some feedback on the SenseAbility program and the effect it was having on the social and emotional learning of our students.

Our aim really is to look at that in conjunction with the academic scores that we get from things like NAPLAN data and overall testing data. We want to look at the link between the social-emotional learning of our students and their academic performance, but also the impact it's having on their overall well-being. When I was provided with the information, I was initially a bit overwhelmed with how big the toolkit appeared to be. I suppose some advice would be that before you even get started, carefully read through the introduction, because that made it very clear for me. Once I worked through it, it was very clear that the decision about how in-depth our evaluation was going to be was really up to the school. We didn't have to do everything that was in the toolkit; we could pick and choose, and that was fantastic.

So I decided initially that we would go with two scales. We were implementing one module at year seven and eight, so I went with the resilience scale and the optimism scale. What I found was there were no issues with implementation; they were very easy and very quick to use. At the beginning of a lesson it only took approximately five minutes, and the girls quickly filled out the survey. Probably the only thing I was asked at the time was to explain what optimistic meant, because that was in one of the questions, so for the younger students I had to give them an explanation of what being optimistic was.

But apart from that, the girls handled it very well and completed it within about five minutes. We then conducted the program, and a few weeks later we did the post-test using exactly the same scales, as Kate said in her explanation. Once again it only took about five minutes for them to do the post surveys. Probably the most time-consuming part was putting the data into the Excel spreadsheet; I would say for the two classes, across the pre and post input of data, it took approximately 40 minutes. In the scheme of things that wasn't too long, but some of our teachers at the school teach multiple classes, so being aware of the time it takes to input the data is a consideration. From there it provided us with information about resilience and optimism. It was interesting that on our resilience scale the results increased significantly, but not so much with optimism.

And in conducting a self-evaluation afterwards, there were things that made me consider how we would do things differently. We were tied to doing the evaluation in term four, purely because of when Beyond Blue needed the information about the usability of the toolkit. We've now made the change, and we're conducting our SenseAbility program in the first half of the year. We've already started with some of our classes in term one, and in term two other classes will also be doing the modules. I can't stress enough how important it is to actually do the self-evaluation at the end. For me, it provided very, very good information about things we needed to consider in implementing the program, and it was a tool that I could take back to my faculty to discuss how we were conducting the program.

I'd also say it's an opportunity to look at how the program is being delivered. The SenseAbility program is fantastic, and it's very well stepped out in terms of how to teach it and what to teach. So it will also give me, as head of faculty, information about how well we are delivering the program, and whether some staff might need extra assistance or we might need to contact Beyond Blue for extra guidance with how we are delivering it. Are we going to continue with it? Absolutely, because the data we're getting back from it we see as being very important, and as I said earlier we'll be looking at that data in relation to our academic performance. Thank you, Mark, I think I've covered what I was planning to cover.

No worries. Thank you very much for your time there, Lee-Anne. That was really inspiring, and there are so many grateful people, both live in the room and viewing this as a recording afterwards, so thank you for taking the time to share your experience from the field, particularly those key parts around timing, the benefits and the insights. Special thanks also to Shannon for preparing that composite slide for you to talk from. I know there was a comment in the box from Tracey which said, "How wonderful to hear from a dedicated school leader, makes data collection and analysis real and achievable," and my comment to you, Lee-Anne, was certainly around this being built in as good teacher practice, not as something that we add on top of our practice. So thank you very much for your time. What we're going to do now, people in the room, is pull together some of these threads. We're going to check back with Dr. Kate for any final comments. During this time we'll give Lee-Anne and Kate a chance to scroll through the text chat box while I'm talking, and then we'll check in with our two presenters to find out what their final words or call to action are, and we certainly invite Rene from Beyond Blue to have input via text chat on any of that. Before we wrap up, I'd obviously like to thank Dr. Kate for her input this afternoon.

I think what you did for us, Kate, was provide some clarity around evaluation: the before, so that we ensure the program being delivered takes into account the context; the during, and why that's important during program delivery, to ensure it's meeting the needs and can be personalised along the way, in flight, if you like; and also the afterwards, about impact. You introduced the tools and the methodology. What I really liked was that you were clear about the benefits and limitations of particular research methodologies. Another point that came through clearly was that evidence of impact is critical, and I think that's my takeaway message: this work around evaluating the effectiveness of programs like SenseAbility is fundamental, not ornamental, in our work within schools. Sometimes, in busy schools, what we need is exactly the kind of evidence that you've presented here this afternoon, which helps to justify and therefore sustain engagement with these programs. So, let me just bounce back to you, Kate, for any further comments you might have in relation to what you've read in the text chat box.

I'm not sure. Rene, do you think there is any kind of comments thread that came through in the chat box that I should speak to or...

Fine, Kate. I think one of the things I noted here was that there was a trustworthiness and a rigour to the data collection. You were very thorough in how you explained the processes. And going back to my previous comment, my sense is that you're saying it's very important for people to approach the evaluation of a program like SenseAbility almost like a researcher. Would you agree with that?

Yes, absolutely. And that's why one of the points on one of those slides was the importance of planning. Program evaluation is not something that can be done well ad hoc. It is really important to sit down and plan it out, and collaborate with others on it, to make sure you're aware of any possible weaknesses in the design you're planning to use. The reason you're doing the program evaluation is to get the data, but if you don't use the right methods, and if you don't conduct the data collection in the appropriate way, then the data you get is essentially useless. So we want to make sure we're not wasting our time; putting that bit of extra time in at the beginning to really plan it out and think things through will pay off in the end.

Terrific. Thanks for that, Kate. I'm conscious we're racing to the top of the clock, but I would just like to check back in with our in-the-field expert practitioner, Lee-Anne. What would be your call to action here? I'm a teacher in a school, I'm listening to the recording of this webinar, I'm keen to get started. What's the most important message here about evaluation?

As Kate was saying, I think the planning of it is vital, and so is having the staff who are going to be involved on board with it, because if you want it to be done well, you want the people involved to really understand what it's about. My faculty is very well aware of, and has been involved in, the decision making about how we're going to evaluate, and I think that's key to getting people on board. There's often resistance when we just force people to do things, but if they feel they're part of something that's actually going to make a difference to the students within the school, then I think it's more likely to be successful.

Thanks for that, Lee-Anne. Great, it's terrific. We've got it worked out, that's fine; we've got an excellent system going here. Thank you, Lee-Anne. Thank you, Dr. Kate, and also Rene for your input in the text chat box. What I'd like to do is draw everyone's attention to Beyond Blue's next webinar. You can see that it's already lined up and ready to go on the 7th of May. It looks at SenseAbility and how it maps to the national curriculum, helping to deliver on some of the curriculum outcomes as well as the outcomes related to the general capabilities and cross-curriculum perspectives. That'll be one people will want to write down and not miss. It's going to be very interesting, so thank you for that.

And while I've got a captive audience, I'd also like to draw attention to our next huge webinar next week, Big History, and why it matters for education. If you have anyone in your school or professional network who is remotely engaged in History, Geography, Physics, General Science, Biology, Mathematics, essentially anything, this is for them. Professor David Christian is going to present his incredible Big History Project, which is a free resource being rolled out globally. So it's a webinar that you really don't wanna miss, and you're gonna wanna write that one down as well, next to your next SenseAbility webinar.

Just a reminder that at the end of this webinar, when you close out in a few moments' time, a very simple survey will pop up. There are no more than five or six questions, which will help us shape the webinar experience for audiences to come and provide useful feedback for our presenters from tonight's fantastic webinar, the SenseAbility Evaluation Toolkit. There's a URL on the screen; Shannon will drop it in the text chat box, and it will take you directly to PAI's page of all their free webinars, which people are welcome to share widely with their networks. Thank you for popping that in, Shannon. So in wrapping up the webinar, a huge thank you again to our brains trust sitting right here, Dr. Kate Jacobs from Monash University; we appreciate your time and input this afternoon, it's been fantastic. We loved hearing the voice from the field, Lee-Anne, and again we thank you for giving up your valuable time to share your expertise. Rene, thank you to yourself and Beyond Blue for the sponsorship of this webinar and for making it available to the education community. So thank you, everybody. We'll see you online at the next webinar.