The company will also open an application process to allow more people working in academia, civil society and journalism to join the Twitter Moderation Research Consortium, a group that Twitter formed in pilot mode earlier this year and that has access to the datasets.
While researchers have studied the flow of harmful content on social platforms for years, they have often done so without direct involvement from social media companies.
During a briefing with reporters, Twitter said it hopes the data will lead to new types of studies about how efforts to fight online misinformation work.
Twitter has already shared datasets with researchers about coordinated efforts backed by foreign governments to manipulate information on Twitter.
The company said it now plans to share information about other content moderation areas, such as tweets that have been labeled as potentially misleading.
Earlier this week, Twitter announced that it is expanding how it recommends posts from accounts that users do not follow. Twitter also plans to build tools that let users control and provide feedback on that content.
“With millions of people signing up for Twitter every day, we want to make it easier for everyone to connect with accounts and Topics that interest them,” Twitter said in a blog post.
As part of the expansion, Twitter is testing an “X” tool that lets users remove recommended tweets they do not wish to see from their timelines.
Twitter’s competitor Meta Platforms said in July that it plans to double the share of recommended content in users’ feeds on Facebook and Instagram by the end of 2023.