
How do we assess skills in tech interviews?


As my team has grown, I’ve had the opportunity to participate in interviews for UX Designers as well as UI Developers. As someone who is very experienced in developing skills assessments, it’s fascinating to me to see how candidates are evaluated on their subject matter expertise. It got me thinking about how we can truly assess skills when vetting candidates for open positions. I’ve designed a lot of assessments for foreign languages, and I think those strategies would work well for tech interviews.

What, exactly, do you want to assess?

The first thing I do when I am designing an assessment is to think about what it is I want to know about the person’s skills. Do I want to assess proficiency? Ability to think on their feet? Ability to use concepts they were taught? Something else?

I then think about what I should expect given the experience level of the person being assessed. Are they a beginner? Intermediate? Advanced? In the case of a language assessment, a native speaker?

I also consider the context. What meaningful, realistic task might the person plausibly encounter that would let them show off the skills I want to assess?

Let me give you a real-life example of an end-of-course assessment I designed to test spoken Spanish in Spanish 101 (the first semester of the first year). To preface this example, I want to describe what this assessment was intended to evaluate. It was meant to test how effectively the student could hold a basic conversation in Spanish with little preparation time. I also wanted to test how well the students retained the information from the semester. I created scenarios based on the concepts in each chapter of the book that would focus on the students’ ability to use those specific concepts. Finally, because it was Spanish 101, the expectation of using grammar structures accurately was not high. At this level, students are not at all native-like or completely accurate. What’s important is that they can make their message understood and hold a basic conversation (give and get information) that would be understood by someone accustomed to language learners’ speech.

Chapter 3: (official situation) El tiempo libre

You are considering a study away program in Lima, Perú. You decide to ask your roommate, who is going to leave next week on the program you are interested in, some questions. Ask them a) how much the program costs and how much the flight costs, b) what kinds of classes they are going to take, c) what kinds of activities they are going to do while there, d) and what the dorm is like. Also ask if they know many facts about Lima or Perú in general.

This chapter focused on how to talk about costs, money, travel, and activities. Classes and descriptions were covered in a previous chapter. The cultural content of this chapter focused on Perú.

Here are the important characteristics of this task:

1. It’s feasible and realistic. A student at this college may have to engage in this sort of conversation.

2. It is in English, so the students cannot use words from the prompt in their conversation, which wouldn’t be an accurate reflection of how well they retained the vocabulary from the lesson/semester.

3. It gives them enough information so that if they don’t know how to talk about one of the things, there are other things they could talk about.

How do you evaluate these assessments?

It’s really important to keep your goals for assessment in mind when you’re evaluating. It’s also important to be consistent in how individuals are being evaluated. To that end, I love me a good rubric.

When you have your goals in mind, you can create buckets in a rubric that represent how well the students have met your goals for them and for the assessment. A well-written rubric also cuts down your grading time.

You might think that a rubric doesn’t allow for individual differences, but that’s not true. A well-written rubric is generic enough to let you evaluate the person as an individual (e.g., did they improve a lot or a little over the course of the semester?) but specific enough that the goals of the assessment are still met.
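To make this concrete, here’s a minimal sketch of what a rubric could look like as a data structure, with a simple weighted-scoring helper. The criteria, level descriptors, and weights are all hypothetical, loosely modeled on the Spanish 101 goals above (comprehensibility and task completion weighted above grammatical accuracy):

```python
# A hypothetical rubric for the Spanish 101 speaking task, expressed as data.
# Criteria, weights, and descriptors are invented for illustration; note that
# grammatical accuracy gets the lightest weight, matching the goals above.

RUBRIC = {
    "task_completion": {
        "weight": 0.40,
        "levels": [  # bucket 0 = lowest, bucket 3 = highest
            "Addressed almost none of the prompts",
            "Addressed some prompts, with frequent breakdowns",
            "Addressed most prompts; message mostly understood",
            "Addressed all prompts and sustained the conversation",
        ],
    },
    "vocabulary": {
        "weight": 0.35,
        "levels": [
            "Little chapter vocabulary used",
            "Some chapter vocabulary, often misused",
            "Chapter vocabulary used with minor errors",
            "Chapter vocabulary used accurately and flexibly",
        ],
    },
    "accuracy": {
        "weight": 0.25,
        "levels": [
            "Errors block understanding",
            "Errors often obscure meaning",
            "Errors present, but meaning is clear",
            "Mostly accurate structures",
        ],
    },
}

def score(ratings):
    """Convert per-criterion bucket picks (0-3) into a weighted 0-100 score."""
    max_bucket = 3
    return 100 * sum(
        RUBRIC[criterion]["weight"] * bucket / max_bucket
        for criterion, bucket in ratings.items()
    )

# A student who completed the task well but made noticeable errors:
print(score({"task_completion": 3, "vocabulary": 2, "accuracy": 2}))  # ~80.0
```

The descriptors do the real work here: because each bucket spells out observable behavior, two graders looking at the same conversation should land in the same bucket most of the time.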

How does this translate to tech interviews?

I think this translates really well! When we are vetting candidates for roles, we often know what we want in the new hire. For example, is this a junior or a senior role? What development skills do we want? Do we want a UX Researcher or someone stronger in visual design to fill this role?

The type of role we are filling will drive the goals for the assessment. For a developer, we want to test their skills in the languages they will be expected to use in this role. For a visual designer, we will probably lean heavily on their design portfolio. For a researcher, perhaps we want to see a case study or see how they design a study or an assessment.

So we should create a contextualized, realistic task for candidates to do in order to evaluate the skills we would like the new hire to have. However, realistic doesn’t mean “have candidates solve an existing problem we have for free because we’re fresh out of ideas.” This is exploitative and you should not do this. Instead, pick a real-world design or development problem they might actually encounter in the future, whether at your business or another. Make it generic enough that even if the candidates don’t have direct industry experience, they may have personal experience that informs how they would solve the problem.

You could create a rubric to evaluate the performance or portfolio, so that there’s consistency across evaluators and candidates. Rubrics can also be great conversation-starters for the team: everyone is looking at the work through a similar lens, which helps surface questions about what’s awesome or lacking in it.
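As a sketch of how that consistency might play out in practice, here’s a hypothetical example in the same spirit as the rubric above: each interviewer fills in the rubric buckets independently, and large disagreements get flagged for the debrief conversation rather than silently averaged away. All the names and ratings are invented:

```python
# Hypothetical sketch: each interviewer scores the same rubric independently,
# and we flag criteria where ratings diverge sharply -- those disagreements
# are the conversation-starters worth raising in the debrief.

from statistics import mean

def debrief(ratings_by_evaluator, disagreement_threshold=2):
    # Take the criteria from the first evaluator's ratings.
    criteria = next(iter(ratings_by_evaluator.values()))
    for criterion in criteria:
        buckets = [r[criterion] for r in ratings_by_evaluator.values()]
        spread = max(buckets) - min(buckets)
        flag = "  <-- discuss as a team" if spread >= disagreement_threshold else ""
        print(f"{criterion}: mean={mean(buckets):.1f} spread={spread}{flag}")

# Invented 0-3 ratings for one candidate's design exercise:
debrief({
    "interviewer_a": {"problem_framing": 3, "visual_craft": 1, "communication": 3},
    "interviewer_b": {"problem_framing": 3, "visual_craft": 3, "communication": 2},
    "interviewer_c": {"problem_framing": 2, "visual_craft": 2, "communication": 3},
})
# visual_craft shows a spread of 2: a prompt for discussion, not a silent average.
```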

You still need to pilot and iterate

Just like with designs, we need to test out our materials to make sure they are sound and produce the results we want. Years of assessment design have taught me that people are awesome and weird and will interpret things differently than you do. (You are not your user, right?) It’s helpful to pilot your materials with someone who isn’t on your team, but isn’t a candidate for the role either, just to figure out what kinks need to be worked out before “going live,” when the stakes are higher and you need to fill a role. Even I need to test out my assessment materials, and I am pretty darn good at assessment design. You won’t get it right the first time. It takes a lot of practice to design good, solid assessments.

I also want to add that you need to keep accessibility (from both a disability and a simple-access point of view) front of mind when designing assessments, but that’s something I will address at a later time, since it deserves its own post. Accommodations are important to consider.

If you’re interested in talking with me about how I can help you/your team design skills assessments, get in touch! Find me on LinkedIn or email me at penn0055 at umn dot edu.