Running a CFP

Our second Serverless Days conference in London happened this summer (ok, it was technically our first under that name: the previous one was called JeffConf London, and this was the follow-on after the rename). We wanted to let you know how we ran the CFP, as it was a little different.

A little bit about the process of choosing the talks.

We spend a lot of time trying to ensure that we keep our conferences diverse. Diversity is not a straightforward concept; it is often in the eye of the beholder rather than a simple binary measure. That makes it a hard task, and our industry as a whole is not very good at identifying and celebrating diversity.

So that we, as an organising committee of 5 (4 technical people and 1 logistical), could avoid bias in our CFP selection process, we anonymised the CFP. This was relatively difficult, as all submissions are named and contain identifying information. So one member of the committee (me) was selected to act as the “anonymiser” for the other technical members (Ant Stanley, James Thomas, Simona Cotin). The anonymisation process was relatively simple (there is a rough sketch of it after the list below):

  • Give each talk a unique non-sequential code
  • Remove the person’s name, email address and date of submission
  • Review each submission and remove any mention of the submitter’s company and any other identifiers (e.g. pronouns, or links to other talks the person had given), unless the company was needed to explain the talk content. An implementation of a technology at a specific company is a different talk from a general talk on that technology by someone who happens to work there.
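
We did all of this by hand, but to illustrate the shape of the anonymisation step, here is a minimal sketch in Python. The field names and the form of the submission record are hypothetical; only the anonymiser kept the mapping from code back to speaker.

```python
import secrets

def anonymise(submission):
    """Return an anonymised copy of a submission and its lookup code.

    `submission` is a hypothetical dict holding the fields from the CFP form.
    """
    # A unique, non-sequential code so reviewers cannot infer submission order.
    code = secrets.token_hex(3).upper()

    anonymised = {
        "code": code,
        "format": submission["format"],   # e.g. "30 minute"
        "title": submission["title"],
        # The abstract still needs a manual pass to strip company names,
        # pronouns and links to the speaker's previous talks.
        "abstract": submission["abstract"],
    }
    # Name, email address and submission date are simply dropped.
    return code, anonymised
```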

This was an imperfect process, but it mostly worked. The submission review process was relatively simple after that. Each reviewer gave a talk a mark out of 5. Any talk that received a mark below 2 from any reviewer was automatically rejected, as either out of scope for the conference or not of sufficiently high quality to be considered. Only a small proportion of talks were actually rejected by reviewers this way.

Where a talk was recognised by a reviewer as coming from their own company or from a company that was a customer (e.g. James Thomas, an IBMer, reviewing a recognisably IBM talk), an impartial reviewer was brought in to provide the review for that talk. This worked really well, and provided a simple way to avoid another bias, where a vendor could be seen to over-promote their talks at the conference. It’s also important to make sure that reviewers come from a variety of backgrounds and technologies so as not to skew the talks (e.g. if all reviewers only build with Lambda).

In this way, all talk submissions were reviewed. Then all talks were ranked according to average review score (still anonymised). After the ranking was completed, we reviewed the top submissions in each format (30 minutes, 10 minutes and 5 minutes) and looked at our approximate schedule based only on ranking.
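
As an illustration of the scoring and ranking rules described above (a hypothetical sketch in Python, not our actual tooling): a talk is dropped if any reviewer marks it below 2, and the remainder are ordered by average mark.

```python
from statistics import mean

def rank_talks(marks_by_code, cutoff=2):
    """marks_by_code maps an anonymised talk code to its reviewers' marks (0-5)."""
    # Any talk with a single mark below the cutoff is rejected outright.
    rejected = {code for code, marks in marks_by_code.items()
                if any(mark < cutoff for mark in marks)}
    # Everything else is ranked by average mark, highest first.
    ranked = sorted(
        ((code, mean(marks)) for code, marks in marks_by_code.items()
         if code not in rejected),
        key=lambda pair: pair[1],
        reverse=True,
    )
    return ranked, rejected

# Example: "A3F9C1" is rejected because one reviewer gave it a 1.
ranked, rejected = rank_talks({
    "A3F9C1": [4, 1, 5],
    "7B2E44": [4, 4, 3],
    "C91D02": [5, 4, 4],
})
```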

Then we iterated over the agenda using some criteria:

  • If subjects were too similar, we kept the top-ranking talk and downgraded the lower-ranked ones.
  • Where someone had two (or more) talks accepted (only the anonymiser knew if this was the case), we asked them to choose a single talk or put one talk into the “backup” section.
  • Where a really good talk was being rejected because of similar talks, but we thought it could still work in a different format (e.g. 30 minutes down to 10 minutes), we asked the speaker to change their talk to that format and slotted them into that part of the agenda.
  • We kept a few backup talks in each format (based on rankings), as there are always speaker drop-outs due to cancellations and illness. It also gave us a pool to draw the next talk from, by rank, if a speaker didn’t agree to a format change.

The process was iterative, still worked from the anonymised data, and took several passes with the whole team, including moving some talks around into different formats.

Outside of the CFP process, there were also invited speakers, as we believed their content was worth sharing with the community.

This is how we reached the final schedule. It had 4 women and 11 men, and many different countries and ethnicities were covered within the 14 talks (one of which was a joint talk).

It’s still an imperfect process, but it seemed to provide a better way of judging the content of a talk whilst avoiding as much company or gender bias as we could. Hopefully it will also encourage a more diverse range of people to propose talks to our CFPs in future.