SECON is Capital One's internal software conference. It is held once a year (it has been run remotely during the COVID-19 pandemic) and features a good number of speakers from across the company. I have spoken at SECON before, but this year I did something different: I was one of the people invited to review proposals and select which talks to accept. I wanted to share what I learned, give advice on how to prepare a talk that will get accepted, and close out with some ideas for how we might do a better job of selecting talks in the future.
How Talks Were Reviewed
SECON has about 6 different tracks; I was one of 5 reviewers for the "Software Engineering" track. We received a list of 217 talk submissions for our track (plus an extra 24 submissions that didn't fit in any track), of which we were expected to select about 45 to accept to the conference, plus a few extras as alternates.
The most defining feature of this process is that we were operating from far too little information. Each applicant submitted a talk title, a brief (about 1 paragraph) summary of the topic, and a brief (1 paragraph) statement of their experience, the topic's relevance, and their experience presenting. So the only thing we were working from during the reviews was 2 paragraphs of description.
The process we followed was simple. We "blinded" the reviewers (using the low-tech approach of simply hiding the columns with applicant name and information), then each reviewer went through every submission and ranked them. We added up the rankings across reviewers to select our top choices. After that, we double-checked the results a few different ways. Where we ended up with too many accepted talks on a specific subject, we would invite presenters to merge two talks, or just reprioritize to include some neglected topics. We also unblinded the submissions and checked for diversity of speakers. If we had ended up with, say, 44 men and one woman, or 41 speakers at the level of "Principal Engineer" or higher and only a handful at lower levels, we would have wanted to at least be aware of it before submitting the list. As it turned out, the original list needed few changes.
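The rank-aggregation step described above can be sketched in a few lines. This is a minimal illustration, not the actual spreadsheet or tooling we used; all the names and data below are hypothetical.

```python
def aggregate_rankings(rankings: list[dict[str, int]], accept_count: int) -> list[str]:
    """Sum each submission's rank across reviewers (lower is better)
    and return the top `accept_count` submission IDs."""
    totals: dict[str, int] = {}
    for reviewer_ranks in rankings:
        for submission_id, rank in reviewer_ranks.items():
            totals[submission_id] = totals.get(submission_id, 0) + rank
    # Sort by summed rank, best (lowest total) first.
    return sorted(totals, key=totals.get)[:accept_count]

# Example: three reviewers each rank three submissions (1 = best).
reviewers = [
    {"talk-a": 1, "talk-b": 2, "talk-c": 3},
    {"talk-a": 2, "talk-b": 1, "talk-c": 3},
    {"talk-a": 1, "talk-b": 3, "talk-c": 2},
]
print(aggregate_rankings(reviewers, accept_count=2))  # → ['talk-a', 'talk-b']
```

One nice property of summing ranks (rather than averaging raw scores) is that it neutralizes reviewers who grade systematically harsher or easier than their peers; only each reviewer's ordering matters.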
How To Get Your Talk Accepted
Of course each reviewer had their own interests, quirks, and opinions -- that is a major reason why we chose to use several independent reviewers and aggregate their ratings. But I can tell you what appealed -- and didn't appeal -- to me.
The topic made a lot of difference. Some topics I've been interested in recently, like serverless or canary deployment, received a boost. Meanwhile, just a few weeks after a formal announcement that Capital One will be reining in our "anything goes" policy on programming languages and trying to stick mostly to about 6 languages for our development, talks about new programming languages were knocked down a few levels. And an unusual topic was always an advantage; I gave high marks to the talk on applying the speaker's AWS development knowledge to a real-life medical issue in their family because it was interesting but also unlike anything else that was proposed.
The other thing that made a big difference was any indication that the talk would be interesting. This is hard to gauge from a one-paragraph summary! But the summaries that were one sentence long, were full of buzzwords, or sounded like corporate-speak lost out to proposals on similar topics that included an example of a surprising conclusion or a controversial position that the talk would address.
A huge percentage of the proposed talks (more than half) fell into one of two categories: "Here's how my team built our new system" or "Let me tell you about our new system because everyone should use it." Both of these formulations began with a few strikes against them, although a few of them made for great talks. The audience probably isn't interested in the architecture your team used unless you learned some interesting things in the process, or tried some new approach that may apply to other projects. And it is almost universally true that we would all be better off if we all used one system to do X (except for all the people whose needs that system doesn't meet) -- but that's not a reason to use YOUR system. Tell us what yours can do that the existing solutions can't, and then everyone will choose to move to your solution.
How We Could Do Better
After this experience, I have a few ideas about how we could do this better for next year's conference. Of course, "better" depends on what our goal is. I would focus more on delivering a better set of talks for the audience, although there are other valid goals like helping people develop presentation skills or showcasing practices we want to encourage across the company.
But if the goal is to deliver better talks, then the existing system has a major flaw: one's ability to write an interesting one-paragraph summary of a talk has very little to do with the ability to deliver a captivating and educational talk to an audience. I would like to have potential speakers submit a 2-3 minute video of themselves presenting.
I am not proposing that people prepare their whole talk... lots of folks won't want to invest the significant time required to create a full talk until after they know it will be accepted. But a couple of minutes speaking to one slide that might end up in their talk is a much lower bar. I also realize that the video clips would make it impossible to blind the reviewers as we did. So I think two passes (one with blinded text descriptions and a second pass with video clips) might make sense.
The clips would provide a way for the presenter's speaking skills and their enthusiasm for the topic to figure into the acceptance process. And I believe those are of the utmost importance to the audience.
(One final note: there are many people with far more experience than me at running medium-sized conferences. It would also be wise to take into account industry best practices.)