
Add survey results #1520

Merged
merged 17 commits into from
Jun 5, 2024

Conversation

janhohenheim
Collaborator

@janhohenheim janhohenheim commented Jun 3, 2024

@janhohenheim janhohenheim mentioned this pull request Jun 3, 2024
@janhohenheim janhohenheim requested a review from Vrixyz June 3, 2024 07:32
Collaborator

@mamaicode mamaicode left a comment


Looks smooth

Member

@AngelOnFira AngelOnFira left a comment


A few items, but overall great work!

- Readers do not want anything in the newsletter generated by AI
- Contributing to the newsletter could be easier. If you've got ideas on how to make this happen, please [let us know](https://github.com/rust-gamedev/rust-gamedev.github.io/issues/1519)!

We will now go through the results in the same order as the questions were asked. The full analysis and data is open-sourced on [GitHub](https://github.com/janhohenheim/rust-gamedev-statistics/tree/main/jan-hohenheim-2024).
Member


Just to check, how clear was it to participants that their data would be made public?

Collaborator Author

@janhohenheim janhohenheim Jun 3, 2024


Not at all. However, this data does not include any identifying information. I recorded no email address, age, gender, employment, location, name, project affiliations, or anything else.
The only thing that might come close is the free-text feedback form, but no one wrote anything remotely private in there.
In academia, where I come from, this kind of data collection usually does not require consent to open-source the results.

@janhohenheim
Collaborator Author

Answered all the feedback. @AngelOnFira, ready for another round :)

@junkmail22
Contributor

Mirroring my comments from Discord to here.

Of note is that the verbal feedback we got indicated that a lot of readers did not fully understand what exactly was being proposed. People worried that we would start generating a majority of the newsletter or entire sections with AI, which is not something any of us wants. Some readers also thought we already started using LLMs. The actual idea was to use AI to generate summaries of articles that were already hand-picked by the editors but not summarized yet because of time constraints. The summaries would then be edited and verified by the editors. The extent to which AI would be used would be limited to up to two sentences per late article. Any confusion in this regard is our fault. We will try to be more clear on such questions in the future.

Although this misunderstanding might have skewed the results, we have reason to believe that the effect is not too large. Anecdotally, when we properly explained the proposal to readers who were against it, most did not change their mind and cited more principled reasons for their stance. Chief among these was solidarity with the large number of creatives who recently lost their jobs due to AI-generated content, inside and outside the game development industry.

I don't think this is a particularly fair or accurate summary of the situation.

In particular, it frames the negative feedback as coming from people who didn't understand it in the first place - I, as well as many others, understood that the proposal involved AI summaries of other articles. In addition, I think that saying that all pushback received was moral is very misleading when a number of practical objections were raised, and that the primary source of pushback was not job loss.

Anecdotally, I feel that the article attempts to frame pushback on LLM content as coming from pearl-clutching luddites who did not actually understand the issue at hand, when in reality, the majority of pushback came from people who understood what was being proposed very well. I think this is neither fair nor accurate.

While a majority of readers (65%) are at least okay with AI-generated summaries, a significant minority (35%) are not okay at all with this proposal.

This is burying the lede: negative responses (Not Okay) far outnumber positive responses (Good, Love). Furthermore, the fact that the scale offered two positive options but only one negative option will likely skew results in the positive direction. You can't just lump the neutral responses (Okay, Don't Care) in with the positive responses in your analysis. It is, at the very least, misleading.
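To illustrate the aggregation point: the same distribution of answers can support both the "65% at least okay" and the "Not Okay is the single largest group" headlines, depending on where the neutral options are counted. This is a minimal sketch with hypothetical counts chosen only to match the published 65/35 split, not the survey's actual data (which lives in the linked repository):

```python
from collections import Counter

# Hypothetical response counts (per 100 respondents); option labels assumed.
responses = Counter(
    {"Love": 5, "Good": 20, "Okay": 25, "Don't care": 15, "Not okay": 35}
)
total = sum(responses.values())

positive = responses["Love"] + responses["Good"]        # strictly positive
neutral = responses["Okay"] + responses["Don't care"]   # neutral
negative = responses["Not okay"]                        # strictly negative

# Lumping neutral in with positive yields the "at least okay" framing...
print(f"at least okay:     {(positive + neutral) / total:.0%}")  # 65%
# ...while splitting it out shows Not Okay is the largest single bucket.
print(f"strictly positive: {positive / total:.0%}")              # 25%
print(f"not okay:          {negative / total:.0%}")              # 35%
```

Both numbers are "facts" about the same data; the disagreement above is about which framing the summary should lead with.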

I'm a bit frustrated by the article in many ways. I think it's inevitable that, when the people doing the review of the survey were the people pushing for the thing that everyone told them not to do in the survey, a fair and accurate explanation of the reasons people disagreed with the idea is kind of impossible.

One final point:

The actual idea was to use AI to generate summaries of articles that were already hand-picked by the editors but not summarized yet because of time constraints. The summaries would then be edited and verified by the editors. The extent to which AI would be used would be limited to up to two sentences per late article.

If the LLM summaries are a relatively small part of the newsletter, and opposition to including them is so vocal and widespread, it should, at this point, be a dead and buried conversation.

@janhohenheim
Collaborator Author

Some points to address:

  • "the people doing the analysis" is just me in this case. If there's anything wrong with the survey, analysis, or summary, that is entirely my fault, and no one else in the working group or otherwise involved in the newsletter should be blamed.
  • I did my best to be objective with the raw data. "Most people are at least okay with AI" is a fact. "Most people would prefer to not have AI" is also a fact. I mentioned both. The colors I used in the plot were chosen intentionally so that the "okay" option is orange, not green.
  • The summary of reasons people had might not be true for you, and I respect that. This is entirely a summary of the feedback I personally got in private conversations, from the survey answers, and in public discussions. There is nothing wrong with people mostly not wanting to have AI in the newsletter out of principle. I personally disagree with it, but I would not call anyone a "pearl-clutching luddite" over it. People have different ethical frameworks, and that's okay. In the end, it doesn't matter much why readers might be opposed to AI; it matters more that they are. I included my summary because I have probably received more messages concerning the topic than anyone else, so it seemed interesting to summarize them.
  • "If the LLM summaries are a relatively small part of the newsletter, and opposition to including them is so vocal and widespread, it should, at this point, be a dead and buried conversation.": I do not wish to choose anything based on who happens to be the most vocal over an issue. If we had an army of AI techbros in this server, that would not make it better to use LLMs either. It is my belief that a survey was the right thing to do. Considering that I now know that most readers would prefer AI not to be involved, I want to respect that wish.
  • The questions about AI, excitement, and ease of contribution did not have optimally phrased answers. In hindsight, I should have used a Likert scale, which is a widespread standard for ordinal data. No sleight of hand here; I just never worked much with ordinal data.
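For readers unfamiliar with the term: a Likert scale maps symmetric, balanced answer options (equal numbers of positive and negative choices around a neutral midpoint) onto ordered codes. A minimal sketch, with hypothetical labels and toy data rather than the survey's actual options:

```python
# Sketch of a balanced 5-point Likert coding for ordinal survey answers.
# Labels and answers below are hypothetical, for illustration only.
import statistics

LIKERT = {
    "Strongly oppose": 1,
    "Oppose": 2,
    "Neutral": 3,
    "Support": 4,
    "Strongly support": 5,
}

answers = ["Oppose", "Neutral", "Strongly oppose", "Support", "Oppose"]
scores = [LIKERT[a] for a in answers]

# For ordinal data, the median is the safer summary statistic: the "distance"
# between adjacent options is undefined, so a mean is hard to interpret.
median_score = statistics.median(scores)
print(median_score)  # 2 ("Oppose")
```

A balanced scale like this also avoids the two-positive-options-versus-one-negative asymmetry criticized earlier in the thread.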

@janhohenheim
Collaborator Author

The last two commits address criticisms raised by people who did not feel represented by the summary given. I hope it's better now :)

@janhohenheim janhohenheim merged commit 21ee91e into source Jun 5, 2024
1 check passed
@janhohenheim janhohenheim deleted the survey-2 branch June 5, 2024 17:33
7 participants