Wikidata:Requests for permissions/Bot


To request a bot flag, or approval for a new task, in accordance with the bot approval process, please input your bot's name into the box below, followed by the task number if your bot is already approved for other tasks. Then transclude that page onto this page, like this: {{Wikidata:Requests for permissions/Bot/RobotName}}.

Old requests go to the archive.

Once consensus is obtained in favor of granting the botflag, please post requests at the bureaucrats' noticeboard.


Bot Name Request created Last editor Last edited
MONAjoutArtPublicBot 2025-06-18, 08:05:03 Anthraciter 2025-06-18, 08:05:03
Dr.üsenbot 2025-06-11, 11:50:05 Dr.üsenfieber 2025-06-13, 14:41:16
Wikidata_Translation_Bot 2025-05-26, 15:13:33 Matěj Suchánek 2025-06-01, 12:37:04
GTOBot 2025-04-24, 08:47:00 SoerenWachsmuth 2025-05-12, 12:37:18
KlimatkollenGarboBot 1 2025-02-20, 09:19:38 Ainali 2025-04-22, 07:51:17
PWSBot 2024-12-12, 12:47:12 Wüstenspringmaus 2025-02-12, 06:33:44
QichwaBot 2024-09-25, 17:03:35 Elwinlhq 2025-04-03, 12:49:31
Leaderbot 2024-08-21, 18:17:53 Lymantria 2024-09-10, 17:29:51
UmisBot 2024-07-25, 16:44:40 Wüstenspringmaus 2025-02-22, 17:21:51
DannyS712 bot 2024-07-21, 03:09:22 Ymblanter 2024-07-26, 04:29:22
TapuriaBot 2024-06-03, 16:18:28 محک 2025-03-26, 13:08:26
IliasChoumaniBot 2024-06-03, 10:16:37 IliasChoumaniBot 2024-07-18, 11:01:28
Browse9ja bot 2024-05-16, 02:16:04 Browse9ja bot 2024-05-25, 13:12:09
OpeninfoBot 2024-04-16, 11:14:27 Wüstenspringmaus 2025-02-15, 09:17:52
So9qBot 9 2024-01-05, 18:41:06 So9q 2025-02-19, 10:03:38
So9qBot 8 2023-12-17, 15:07:59 So9q 2025-02-19, 10:12:17
HVSH-Bot 2023-12-31, 12:37:18 Wüstenspringmaus 2025-06-13, 17:21:32
RudolfoBot 2023-11-29, 09:29:38 TiagoLubiana 2023-11-30, 23:47:22
GamerProfilesBot 2023-10-05, 11:06:23 Jean-Frédéric 2024-05-19, 07:39:50
MangadexBot 2023-08-06, 18:01:17 Lymantria 2025-02-09, 08:12:19
WingUCTBOT 2023-07-31, 10:07:51 Wüstenspringmaus 2025-03-15, 14:54:58
MajavahBot 2023-07-11, 19:54:55 Wüstenspringmaus 2024-08-29, 11:05:24
FromCrossrefBot 1: Publication dates 2023-07-07, 14:31:17 Wüstenspringmaus 2025-03-15, 15:00:05
AcmiBot 2023-05-16, 00:36:49 Wüstenspringmaus 2025-03-15, 14:58:15
WikiRankBot 2023-05-12, 03:36:56 Wüstenspringmaus 2025-02-18, 11:37:57
ForgesBot 2023-04-26, 09:30:12 Wüstenspringmaus 2025-02-15, 19:09:57
IngeniousBot 3 2023-03-22, 16:29:58 Wüstenspringmaus 2025-02-17, 12:48:11
LucaDrBiondi@Biondibot 2023-02-28, 18:25:03 LucaDrBiondi 2023-03-31, 16:10:37
Kalliope 7.3 2022-12-07, 09:16:20 Wüstenspringmaus 2025-02-16, 17:22:37
DL2204bot 2 2022-11-30, 11:19:21 Wüstenspringmaus 2025-03-17, 12:30:34
Cewbot 5 2022-11-15, 02:20:05 Kanashimi 2025-02-15, 12:52:53
Mr Robot 2022-11-04, 14:09:41 Wüstenspringmaus 2025-05-05, 11:37:33
YSObot 2021-12-16, 11:33:29 So9q 2024-01-02, 10:32:27
PodcastBot 2022-02-25, 04:38:31 Iamcarbon 2024-10-16, 21:26:09

MONAjoutArtPublicBot (talk • contribs • new items • new lexemes • SUL • block log • user rights log • user rights • xtools)
Operator: Anthraciter (talk • contribs • logs)

Task/s: Create Wikidata items for public artwork and artists in Québec, Canada via user input with seed information from the MONA public art database when available.

Code: (work in progress)

Function details:

Facilitate creation of Wikidata items for public artworks and artists with pre-populated suggestions from the MONA public art database

Transform user inputs into appropriate format for Wikidata items

Check that each proposed item is not a duplicate before adding to Wikidata

--Anthraciter (talk) 08:05, 18 June 2025 (UTC)[reply]

Wikidata Translation Bot (talk • contribs • new items • new lexemes • SUL • block log • user rights log • user rights • xtools)

Operator: Jotechnet (talk • contribs • logs)

Task/s: Automate the translation of Wikidata item labels and descriptions across supported languages and submit them using the official Wikidata API.

Code: GitHub - wikidata-translator

Function details: The Wikidata Translation Bot is designed to support the multilingual enrichment of Wikidata items by translating and submitting labels and descriptions using an automated pipeline (a minimal sketch of the submission step follows after the lists below). The bot performs the following tasks:

  • Reads a list of QIDs (Wikidata item identifiers).
  • Fetches source labels and descriptions.
  • Translates content into the target language using a supported translation backend.
  • Authenticates using OAuth (in line with Wikidata bot requirements).
  • Submits updates using the `action=wbeditentity` API.
  • Implements rate limiting and retries to comply with editing guidelines and avoid spamming.
  • Logs all responses for auditing and debugging.

Key Safeguards:

  • Prevents overwriting if the label or description already exists in the target language.
  • Validates language codes and input data before submitting edits.
  • Includes throttling and error recovery mechanisms to respect API usage limits.
  • The bot will initially run under supervision, focusing on high-priority or underrepresented languages.
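The following is a minimal sketch of the submission step described above, assuming a requests session that already holds an OAuth-authenticated login and a hypothetical translate() helper; it is illustrative only and is not the bot's actual code. It mirrors the safeguard of never overwriting an existing label in the target language.

```python
import json
import requests

API = "https://www.wikidata.org/w/api.php"
session = requests.Session()  # assumed to be OAuth-authenticated already

def add_translated_label(qid, source_lang, target_lang, translate, csrf_token):
    """Fetch the source label and submit a translated label via wbeditentity,
    skipping the edit if the target language already has a label."""
    entity = session.get(API, params={
        "action": "wbgetentities", "ids": qid,
        "props": "labels", "format": "json",
    }).json()["entities"][qid]

    labels = entity.get("labels", {})
    if target_lang in labels or source_lang not in labels:
        return None  # safeguard: never overwrite, and skip items without a source label

    translated = translate(labels[source_lang]["value"], source_lang, target_lang)
    data = {"labels": {target_lang: {"language": target_lang, "value": translated}}}

    return session.post(API, data={
        "action": "wbeditentity", "id": qid,
        "data": json.dumps(data), "token": csrf_token,
        "summary": f"Add {target_lang} label translated from {source_lang}",
        "bot": 1, "format": "json",
    }).json()
```

Rate limiting, retries, and logging (the remaining bullet points) would wrap around this call in a full implementation.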

---

Bot details: User:Wikidata Translation Bot (talk • contribs • new items • new lexemes • SUL • block log • user rights log • user rights • xtools)

Operator: User:Jotechnet (talk • contribs • logs)

  Comment The user account does not exist.
  Comment The "Code" link results in a 404.
  Comment "using a supported translation backend": could you please specify this in more detail? Also, have potential copyright issues been considered?
  Comment "focusing on high-priority or underrepresented languages": can you please name some of these languages? Do you cooperate with their speakers and/or wikis? --Matěj Suchánek (talk) 12:36, 1 June 2025 (UTC)[reply]

GTOBot (talk • contribs • new items • new lexemes • SUL • block log • user rights log • user rights • xtools)
Operator: SoerenWachsmuth (talk • contribs • logs)

Task/s: Creating new datasets for people or places related to the project https://gestapo-terror-orte.de/

Code: In progress. Coming soon. The code will be available in the open GitHub Repo: https://github.com/TIBHannover/ogt-web-map

Function details: The project https://gestapo-terror-orte.de/ will get a form where users can create new places or people related to the Gestapo terror in Lower Saxony, Germany. The added content will be reviewed before the bot pushes it to Wikidata. The following data can be created: new victims/perpetrators. Users can use the form to add the name, birth date and other data for the person and then submit it to the project's internal database. (Example of a victim dataset: https://www.gestapo-terror-orte.de/map?pid=Q133567335&group=victims) New places related to the Gestapo terror: prisons, events, memorial places. In the form you can also add more information about when an event took place, descriptions, and further details, as in this example: https://www.gestapo-terror-orte.de/map?id=Q106625639&group=statePoliceHeadquarters&lat=52.3664978&lng=9.7321152 Adding citations to support the created data is required and will be checked. After a review process the admin can allow or deny the request; on approval, the bot creates the new dataset and includes the data. After that, the new data is shown on the map.

This process is still in progress, since the development phase has only just started, so I will edit here when things change. --SoerenWachsmuth (talk) 08:47, 24 April 2025 (UTC)[reply]
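For illustration only, a rough sketch of what the approval-triggered creation step could look like, with a duplicate check against existing items before anything is created; wbsearchentities and wbeditentity are standard Wikibase API actions, while the session object, field names and language code are placeholder assumptions rather than the project's actual code.

```python
import json
import requests

API = "https://www.wikidata.org/w/api.php"
session = requests.Session()  # assumed to be logged in as the bot account

def find_existing_items(name: str, language: str = "de") -> list[str]:
    """Return QIDs whose label or alias matches the submitted name."""
    response = session.get(API, params={
        "action": "wbsearchentities", "search": name,
        "language": language, "type": "item", "limit": 10, "format": "json",
    }).json()
    return [hit["id"] for hit in response.get("search", [])]

def create_person_item(record: dict, csrf_token: str):
    """Create an item for an approved record unless a likely duplicate already exists."""
    if find_existing_items(record["name"]):
        return None  # hand the record back to the reviewer instead of creating a duplicate
    data = {
        "labels": {"de": {"language": "de", "value": record["name"]}},
        "descriptions": {"de": {"language": "de", "value": record["description"]}},
    }
    return session.post(API, data={
        "action": "wbeditentity", "new": "item",
        "data": json.dumps(data), "token": csrf_token, "bot": 1, "format": "json",
    }).json()
```

In practice the reviewer would probably be shown the candidate matches rather than the record being skipped silently.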

Could we get approval for the rights on the test domain, so that we can test our code accordingly?
It seems it doesn't work so far.
Thank you very much. GTOBot (talk) 07:16, 30 April 2025 (UTC)[reply]
I just transcluded the page, so that people will find this request. Notified participants of WikiProject Victims of National Socialism. Samoasambia 14:08, 30 April 2025 (UTC)[reply]
@SoerenWachsmuth I have to say honestly, I don't quite understand the concept: you are building a database which, however, is not meant to work standalone, but mainly with the goal of then entering the data into Wikidata? But you want to do that with a bot, even though no automatic edits are planned at all and everything has to be approved? How do you handle duplicate checking? Why are the example items Adam Jaschke (Q133567335) and Q133567355 full of constraint violations and without any references, and, further on, items like Q108127321 with similar problems and even a typo? Are you sure about the modelling of Q133567355, and has the obvious WD:N issue already been discussed anywhere? If, as I assume given the public funding, money is involved, why is there no corresponding disclosure? --Emu (talk) 14:51, 30 April 2025 (UTC)[reply]
@Emu The data is to be entered by users via a form. There they can also search for already existing entries, so that no duplicates are created. When users then save the data, it is first stored in our database and then checked by our admins. Once they approve the data, the process that transfers the data to Wikidata is triggered; that is what I assume the bot is needed for. (Or are there ways to do this without a bot?) In the API available to me, you have to log in as a bot. In recent weeks we have had several workshops in which data was created in Wikidata, and we are in the process of correcting the entries, including the citation errors and missing information. With the form we could also counteract missing citations, since they would then be inserted everywhere in the appropriate places. SoerenWachsmuth (talk) 12:31, 12 May 2025 (UTC)[reply]
To come back to the basic concept: this is a citizen science project in which the sites of the Gestapo terror are recorded. The data is meant to be publicly available and viewable, and people are to be enabled to take part in collecting it. More can be read here: https://www.gestapo-terror-orte.de/projekt
Regarding the funding disclosure, I will ask again. In principle, the project is funded by the Stiftung Niedersächsische Gedenkstätte in cooperation with the TIB (Technische Informationsbibliothek Hannover). SoerenWachsmuth (talk) 12:37, 12 May 2025 (UTC)[reply]

KlimatkollenGarboBot (talk • contribs • new items • new lexemes • SUL • block log • user rights log • user rights • xtools)
Operator: Klimatfrida (talk • contribs • logs), NL-Moritz (talk • contribs • logs)

Task/s: upload carbon footprint data from Klimatkollen to Wikidata

Change Frequency: Changes are only pushed when new reports are passed; this mostly happens in Q2 of each year. There will be one possible change per report per company.

Code: https://github.com/Klimatbyran/ Developed in fork: https://github.com/Klimatbyran/garbo/compare/main...okis-netlight:garbo:feat/wikidata-update

Function details: After carbon footprint data is verified by a human from Klimatkollen, the bot will push this data to the corresponding company entity's carbon footprint section on Wikidata. If there is already data in this section, for specific reporting period and scope, the bot will update this data, and otherwise create a new data point. --KlimatkollenGarboBot (talk) 09:19, 20 February 2025 (UTC)[reply]

Looks good. @Klimatfrida, can the bot make around 50 test edits so that we can verify that it is working as intended? Ainali (talk) 08:34, 21 February 2025 (UTC)[reply]
Yes, we are trying this today. :) Klimatfrida (talk) 08:57, 21 February 2025 (UTC)[reply]
We did the requested edits. We also underestimated the number of new datapoints for AstraZeneca a bit, so the bot did a bit more than 50 edits; we hope that is not a problem. NL-Moritz (talk) 12:58, 21 February 2025 (UTC)[reply]
While the result looks good, those 69 edits should be grouped into one single edit. Please rewrite the bot to make it easier to follow. Ainali (talk) 14:05, 21 February 2025 (UTC)[reply]
For reference, it is the API function wbeditentity that can make bundled edits to the same item. Ainali (talk) 19:01, 21 February 2025 (UTC)[reply]
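For reference, a hedged sketch of how several statements could be bundled into a single wbeditentity call; the property ID P0000 and the datapoint fields are placeholders, not the property or code the bot actually uses.

```python
import json

API = "https://www.wikidata.org/w/api.php"

def push_bundled_statements(session, qid: str, datapoints: list, csrf_token: str):
    """Submit all new statements for one company in a single wbeditentity edit.
    P0000 is a placeholder for the emissions property the bot actually uses."""
    claims = [{
        "mainsnak": {
            "snaktype": "value",
            "property": "P0000",
            "datavalue": {
                "value": {"amount": f"+{dp['value']}", "unit": dp["unit_uri"]},
                "type": "quantity",
            },
        },
        "type": "statement",
        "rank": "normal",
    } for dp in datapoints]

    return session.post(API, data={
        "action": "wbeditentity", "id": qid,
        "data": json.dumps({"claims": claims}),
        "token": csrf_token, "bot": 1, "format": "json",
        "summary": f"Add {len(claims)} emission statements in one bundled edit",
    }).json()
```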
Thanks for the input, I implemented this change. The only thing is that we have to do the edits for scope 1 + 2 and scope 3 in two groups, as scope 3 has an additional qualifier and the library has problems if this qualifier is marked to be compared when it is actually missing for scope 1 and 2. Currently I have grouped the edits for each scope, but I will combine 1 + 2 shortly and hope that this is a good solution. NL-Moritz (talk) 12:40, 24 February 2025 (UTC)[reply]
Sure, let us know when you have implemented the changes and have made a few more test edits with the new implementation. Ainali (talk) 15:12, 24 February 2025 (UTC)[reply]
@Klimatfrida @NL-Moritz: Something is going wrong. The bot has added a second set of statements on Inter IKEA Holding, duplicating the existing ones, see this combined diff. Please clean that up and fix your code before doing any further edits. Ainali (talk) 16:04, 24 February 2025 (UTC)[reply]
Thanks for making us aware. The bot itself works correctly in the sense that it does not create duplicates of the same data for an identical time period, scope and category. But the error showed an issue in our data: the values of the datapoints are verified, but in some cases the reporting period was not. We will tackle the issue directly to solve it as soon as possible, and I will clean up the wrong datapoints in the meantime. Is it okay if we go for another test run once we have checked our data for this error, as I think the bot itself functions correctly? NL-Moritz (talk) 16:31, 24 February 2025 (UTC)[reply]
The Inter IKEA Holding item is cleaned up: I removed the duplicated values, changed the reporting periods for new datapoints that were not there before, and verified everything using the latest report. NL-Moritz (talk) 17:11, 24 February 2025 (UTC)[reply]
I am not sure what you mean by saying these are not duplicates. When looking at, for example, these two added statements at H&M (Q188326), they look exactly the same to me. 1: Q188326#Q188326$55F6B2DF-4571-477F-9DE1-E24432D72F5C, and 2: Q188326#Q188326$FE3EBB72-A414-4463-9409-89D9660B5909. (This item also needs cleaning.)
But to your last question, yes, but please make only a few edits at a time to verify that any error (regardless of whether it is in the code or the data) is not propagated too far and stays at a scale small enough to clean up. Ainali (talk) 19:15, 24 February 2025 (UTC)[reply]
Ok, we did some more testing and now feel quite comfortable that the bot works as intended with just a single edit per entity. We also restricted the data we want to upload to the most recent reporting period, as we currently don't know whether the community wants all of the historic data or not. This restriction leads to the problem that our bot currently cannot add anything new in most cases, so finding entities to make edits on is a bit hard. I did run one successful edit on Holmen AB Q1467848. The thing is that there is a statement which shows the carbon footprint for two different scopes at once (scope 1 and scope 3), as the value is the same. I personally find this statement ambiguous and would prefer the separate statements our bot added; it also shows that the bot relies on the qualifiers to distinguish between different statements and cannot detect these special cases. As it is quite hard to make test runs in the live system, since most of the recent data is already there, I also did some in the sandbox (https://test.wikidata.org/w/index.php?title=Q238638&action=history), if this is viable. NL-Moritz (talk) 07:53, 28 February 2025 (UTC)[reply]
Thanks for pondering the conundrum with multiple timepoints! Indeed, all historical data may be a bit much for now (as there is a hard limit of the size of an item). For that, we should rather look into storing the data as .tab files in the Data namespace on Wikimedia Commons. However, just adding one new year at a time going forward should be fairly safe, as the growth rate is very limited.
On Holmen AB (Q1467848), I believe it was this edit going wrong in an OpenRefine batch: https://www.wikidata.org/w/index.php?title=Q1467848&diff=next&oldid=2180340963 and I think the "Scope 3" qualifier can just be removed there.
Yes, it is certainly viable to do test edits there when there are no current updates to make here. The edits there look good. I can't see any duplicated statements either, so I guess you have checked for that, is that correct? Ainali (talk) 09:48, 28 February 2025 (UTC)[reply]
Yes, I do a comparison between the statements already in the carbon footprint section and the datapoints we have. Two datapoints describe the same statement if the start and end date of the reporting period and the scope are equal; additionally, for scope 3, the category also has to be the same. If there is a match, I check whether the value is the same: if so, no update is done; if not, I update the value. If I find datapoints of ours that don't have a match to one of the existing statements, I add them to the statement group. NL-Moritz (talk) 10:54, 28 February 2025 (UTC)[reply]
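A compact sketch of that matching rule, with illustrative field names rather than the bot's actual data model:

```python
def same_datapoint(existing: dict, new: dict) -> bool:
    """Two datapoints describe the same statement if the reporting period (start and
    end date) and the scope match; for scope 3 the category must also match."""
    if (existing["start"], existing["end"], existing["scope"]) != \
            (new["start"], new["end"], new["scope"]):
        return False
    if new["scope"] == "scope3":
        return existing.get("category") == new.get("category")
    return True

def plan_edit(existing_points: list[dict], new_point: dict) -> str:
    """Decide whether to skip, update or add for one verified datapoint."""
    for old in existing_points:
        if same_datapoint(old, new_point):
            return "skip" if old["value"] == new_point["value"] else "update"
    return "add"
```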
Just a general question regarding the process. Are we currently expected to do more test runs or are there any other requested changes pending on our side? NL-Moritz (talk) 07:23, 10 March 2025 (UTC)[reply]
@NL-Moritz The latest edit from the bot on Systembolaget (Q1476113) looks weird: all the different scope 3 values were replaced by the same one. Was there some change in your code that created this error? Ainali (talk) 13:27, 16 March 2025 (UTC)[reply]
@Ainali I think that was a previous bug and the issue is fixed now. I will look into this edit to fix any faults caused by the bug. Apart from that, the newest runs of the bot were done in the sandbox https://test.wikidata.org/wiki/Special:Contributions/KlimatkollenGarboBot as Wikidata is pretty much up to date and the bot cannot contribute anything new. NL-Moritz (talk) 08:27, 17 March 2025 (UTC)[reply]
@NL-Moritz Ok! Nice that you also added edit summaries. Could it be made more granular, so that it is clear from the summary whether it is a pure addition, whether existing statements are being updated, or both (or removed, as I saw some edits on test.wikidata)? Ainali (talk) 09:49, 17 March 2025 (UTC)[reply]
@NL-Moritz Slightly related question, but thinking about that you don't have any more edits to do as tests right now, what is the estimated number of edits for the bot per year? Ainali (talk) 09:03, 18 March 2025 (UTC)[reply]
@Ainali a rough estimation would be that we update the data of the companies we have once a year. If we estimate that a company has around 10 carbon footprint datapoints and we currently have around 300 companies, this would lead to 3000 edits a year. But we want to increase the number of companies in the future, so the number of edits will scale with it. NL-Moritz (talk) 09:24, 18 March 2025 (UTC)[reply]
Thanks for the estimation! Even with a ten or hundred fold increase, it seems like a reasonable amount. Ainali (talk) 09:28, 18 March 2025 (UTC)[reply]
I will look into this to make clear which data points were newly added and which were just replaced with newer data. NL-Moritz (talk) 09:26, 18 March 2025 (UTC)[reply]
@Ainali Had a bit of a deeper look into the summary. Initially I thought I could add a change summary to every claim, but as I do the change in one edit, that is not possible. Therefore, do you want me to split the edits up into one for additions, one for updates and one for removals, or should I try to write a longer change summary which covers everything in one edit? NL-Moritz (talk) 12:30, 20 March 2025 (UTC)[reply]
@NL-Moritz I don't think that is needed, if the edit is a combination of additions and updates, the current summary is fine. I was more thinking about the future, when there might be "pure" additions. Ainali (talk) 17:02, 20 March 2025 (UTC)[reply]
@Ainali After adding a claim for the total emissions to the majority of the companies we track, I noticed that there are still a few gaps in the emissions data of some companies. I would love to fill these up with our current data using the bot, as after a lot of testing I feel quite comfortable that with some supervision it should work fine. One thing I am a bit unsure of is the removal of older claims. We discussed that the number of claims per entity is limited and that therefore only the most recent data should be in the entity, so the bot will remove/replace older data. An example can be seen here: https://www.wikidata.org/w/index.php?title=Q47508289&action=history. If this is okay I would go ahead with updating all companies with our data. NL-Moritz (talk) 12:10, 16 April 2025 (UTC)[reply]
@NL-Moritz Please don't delete already added statements! Especially this edit, which claims an update but is a pure deletion, is not good. There's plenty of room if you just update once per year; the comment about limits was more a safeguard in case you were to add historical data stretching back a couple of decades, which might reach the limits. Ainali (talk) 14:21, 16 April 2025 (UTC)[reply]
@Ainali Sorry for that, I guess we had a misunderstanding earlier regarding this. I reverted the change and will update the bot so it does not remove data from previous years. NL-Moritz (talk) 14:52, 16 April 2025 (UTC)[reply]

  Notified participants of WikiProject Climate Change as we are using that emissions model. Ainali (talk) 09:59, 28 February 2025 (UTC)[reply]

  Comment I do not see issues in terms of the emissions model but I am wondering about the references. Instead of a reference URL (P854) statement pointing to a PDF of the report, we might want to go for a stated in (P248) statement pointing to an item about the report, with a link to the PDF and an Internet Archive copy. However, I have no idea how diverse the references are that the bot would be citing. If they are all essentially PDFs, maybe the above workflow would be useful. If it's sometimes a PDF, sometimes a URL, sometimes something else, then I'd keep the bot's settings for now. --Daniel Mietchen (talk) 13:20, 28 February 2025 (UTC)[reply]
The reference documents (reports) should all be PDFs, so your proposed structure would work. We aligned our structure of the datapoints so far to this model: Wikidata:WikiProject Climate Change/Models#Emissions. Implementing your changes would be possible. As the name for the reports I would suggest "<company name> GHG Protocol <year>" to uniquely identify these reports and avoid any duplicates. Regarding the linking of the file, we also want to try to host a copy of every report at klimatkollen.se to ensure that these reports are available long-term, and we could also think about a backup of the reports at the Internet Archive. If it is okay for everybody, we would do this as an ongoing process: first linking to the original source, then, as soon as we store a copy on our site, to that copy, and in the future adding a link to a copy at the Internet Archive.
One more thing about the properties in the reference: in the model I referenced before, there is also the property determination method or standard, with our AI garbo (which extracts data from the written reports) as a value. Should we also fill out this property? If so, our plan is to only upload data after it is verified by a human, so using garbo would not fit that well. Instead, we would need another entity describing this method. Any idea what to call that? NL-Moritz (talk) 13:42, 4 March 2025 (UTC)[reply]
@NL-Moritz If all data will be manually verified, we can just skip the determination method as I modeled it in the model example, because that is then just like how we normally do. I modeled that when I was expecting a totally automated process. Ainali (talk) 14:20, 4 March 2025 (UTC)[reply]
@Daniel Mietchen I am not sure we want to create items for each annual report, that seems excessive. I'd rather keep the reference URLs as is. Ainali (talk) 14:17, 4 March 2025 (UTC)[reply]
@Ainali Do you have input here? Klimatfrida (talk) 11:46, 17 March 2025 (UTC)[reply]
@Klimatfrida I am a bit uncertain what you refer to, since you replied to the input I had. Ainali (talk) 18:02, 17 March 2025 (UTC)[reply]
Hello! I am currently working with @Klimatfrida on developing this bot. I agree with @Daniel Mietchen that it would be excessive to create new items for each report. In my opinion the current way of presenting the report URL is a good start, and later on we could just switch to using URLs that point to some archive if we find that necessary. Oliver-NL (talk) 12:53, 18 March 2025 (UTC)[reply]

Ping @NL-Moritz: Please see this discussion on Swedish Wikipedia about some odd values added in the testing: w:sv:Wikipediadiskussion:Projekt_klimatförändringar#Misstänkt_felräkning/dubbelräkning_av_koldioxidavtryck_inlagda_av_KlimatkollenGarboBot. Ainali (talk) 06:20, 19 April 2025 (UTC)[reply]


QichwaBot (talk • contribs • new items • new lexemes • SUL • block log • user rights log • user rights • xtools)
Operator: Elwinlhq (talk • contribs • logs)

Task/s: Creating wikidata lexemes for the Quechua languages

Code: lexeme_upload.py describes the code for creating lexemes for the Quechua languages based on a list extracted from the Qichwabase, which is a Wikibase.cloud instance of Quechua lexemes.

Function details: The tasks carried out by the bot mainly include the creation of lexemes for the Quechua languages based on the Qichwabase. The lexemes were already modelled according to the Wikidata lexeme model.

A small subset of the lexemes was already imported into Wikidata using lexeme_upload.py with the support of Kristbaum (talk • contribs • logs). Here is one example of a Quechua lexeme: aparquy/aparquy (L1322219).

Afterwards, pronunciation audio was added to the lexemes with the support of the LinguaLibre tool.

Now I would like to continue this process by creating further lexemes, so that pronunciation audio can be recorded for them.
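For readers unfamiliar with lexeme imports, a minimal sketch of the creation step such a script might perform via the standard wbeditentity API; the language-item and lexical-category QIDs are deliberately left as parameters, since the actual values used by lexeme_upload.py are not shown here.

```python
import json
import requests

API = "https://www.wikidata.org/w/api.php"

def create_lexeme(session: requests.Session, lemma: str, lang_code: str,
                  language_item: str, lexical_category: str, csrf_token: str):
    """Create a new lexeme via wbeditentity. language_item and lexical_category are
    QIDs (e.g. the item for a Quechua language and for 'verb'); placeholders here."""
    data = {
        "lemmas": {lang_code: {"language": lang_code, "value": lemma}},
        "language": language_item,
        "lexicalCategory": lexical_category,
    }
    return session.post(API, data={
        "action": "wbeditentity", "new": "lexeme",
        "data": json.dumps(data), "token": csrf_token, "bot": 1, "format": "json",
    }).json()
```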

Thanks for your support and understanding.

--Elwinlhq (talk) 17:03, 25 September 2024 (UTC)[reply]

Leaderbot (talk • contribs • new items • new lexemes • SUL • block log • user rights log • user rights • xtools)
Operator: Leaderboard (talk • contribs • logs)

Task/s: phab:T370842 and meta:Global reminder bot

Code: https://github.com/Leader-board/userrights-reminder-bot, though this is under development

Function details: See the above phabricator task. It should be noted that

  • I'm submitting near-identical requests on multiple wikis, and
  • I do not expect this bot to run that much (if at all) on Wikidata, and it will not require a bot flag; however, Wikidata:Bots explicitly mentions that approval is needed (and the bot flag set, which I find unnecessary and even a bad idea), and
  • Here's a test edit (the text will be generalised for Wikidata). Users will be able to opt-out from this using a central page on Meta.

P.S.: I also noticed that it says that a bot "should never be used to make non-automated edits in the user talk namespace", which my bot will do; I am not sure if there's a way out of that.

--Leaderboard (talk) 18:17, 21 August 2024 (UTC)[reply]

Can you link examples of temporary rights on Wikidata? Sjoerd de Bruin (talk) 16:34, 26 August 2024 (UTC)[reply]
@Sjoerddebruin: [1], [2] and [3]. As noted above,
  • Wikidata does not make that much use of temporary rights (the flooder right is automatically ignored), and
  • many (but not all) of them are IPBE - some communities prefer that the bot exclude them. In that case it will run rarely, like in the case of the third example I shared above.
Leaderboard (talk) 05:29, 27 August 2024 (UTC)[reply]
I don't understand how a bot flag is needed for a bot that makes "non-automated edits in the user talk namespace"? This may be my confusion... --Lymantria (talk) 17:11, 9 September 2024 (UTC)[reply]
@Lymantria:, the edits are automated, just that the frequency is (very) low. Leaderboard (talk) 08:00, 10 September 2024 (UTC)[reply]
I'd prefer that you go for a global bot account. --Lymantria (talk) 13:00, 10 September 2024 (UTC)[reply]
@Lymantria But global bots are disabled on this wiki (see Meta:Special:WikiSets/14 where Wikidata is in the opt-out set). If there is consensus from the community that global bots should be allowed to run on Wikidata, that's fine by me as well. To reiterate, I don't even need a bot flag in the first place, just approval to run this bot (without one). Leaderboard (talk) 16:29, 10 September 2024 (UTC)[reply]
I'm sorry, you are right. --Lymantria (talk) 17:29, 10 September 2024 (UTC)[reply]

UmisBot (talk • contribs • new items • new lexemes • SUL • block log • user rights log • user rights • xtools)
Operator: Stuchalk (talk • contribs • logs)

Task/s: This bot will add string representations of units of measurement to the Wikidata pages of units of measurement.

Code: The Python project on the "Units of Measurement Interoperability Service" (UMIS), that this bot will support/enable, is at https://github.com/usnistgov/nist_umis .

Function details: String representations of different units of measurement are being aligned to allow translation between different unit representation systems. As the developer of the UMIS, I have concluded that Wikidata is the best place to organize/align unit representation strings. Once available at nist.gov later this year, the UMIS website will enable users to programmatically translate between unit representation systems, and additional functionality is planned. There are already Wikidata properties for some of the unit representation systems (e.g. QUDT), and additional ones will be requested. This is my first bot permission request, so if more info is needed please let me know. --Stuart Chalk (talk) 16:44, 25 July 2024 (UTC)[reply]

Please make some test edits. Ymblanter (talk) 20:25, 16 August 2024 (UTC)[reply]
@Stuchalk: reminder to make your test edits. --Wüstenspringmaus talk 12:21, 15 February 2025 (UTC)[reply]
Thanks for the reminder. I am now working on this. How many test edits is reasonable? Stuart Chalk (talk) 08:48, 22 February 2025 (UTC)[reply]
@Stuchalk thank you, circa 50 edits would be great. Wüstenspringmaus talk 17:21, 22 February 2025 (UTC)[reply]

DannyS712 bot (talk • contribs • new items • new lexemes • SUL • block log • user rights log • user rights • xtools)
Operator: DannyS712 (talk • contribs • logs)

Task/s: I want to get approval for a bot with translation admin rights that will automatically mark pages for translation if and only if the latest version is identical to the version that is already in the translation system, i.e. only pages with no "net" changes in the pending edits.

Code: not yet

Function details: I am filing almost identical requests for bot approval on a bunch of wikis, and figured I should put some of the details in a central location. Please see meta:User:DannyS712/TranslationBot for further info. --DannyS712 (talk) 03:09, 21 July 2024 (UTC)[reply]

@Lymantria @Ymblanter just noting here that I cannot do test edits unless the bot is granted translation admin rights, unless you want me to test under my own account --DannyS712 (talk) 00:57, 26 July 2024 (UTC)[reply]
  Done Ymblanter (talk) 04:29, 26 July 2024 (UTC)[reply]

TapuriaBot (talk • contribs • new items • new lexemes • SUL • block log • user rights log • user rights • xtools)
Operator: محک (talk • contribs • logs)

Task/s: interwiki

Code: interwikidata.py from PAW, mainly for the Mazandarani and Gilaki Wikipedias.

Function details: novice --محک (talk) 16:18, 3 June 2024 (UTC)[reply]

there isn't enough info here. i don't understand what this is doing or how it is doing it BrokenSegue (talk) 15:31, 7 June 2024 (UTC)[reply]

@ محک: Could you please provide more details? --Wüstenspringmaus talk 12:19, 15 February 2025 (UTC)[reply]

I just run a one-line command on PAW and my bot checks all pages on our local wiki for interwiki links (the old system for interwikis) that are not connected to Wikidata, and connects them here. محک (talk) 13:08, 26 March 2025 (UTC)[reply]

IliasChoumaniBot (talk • contribs • new items • new lexemes • SUL • block log • user rights log • user rights • xtools)
Operator: Ilias Choumani / IliasChoumaniBot (talk • contribs • logs)

Task/s: Automatic updating of data from JSON files on German scientists

Code: Will be in Python (not there yet)

Function details: --IliasChoumaniBot (talk) 10:16, 3 June 2024 (UTC)[reply]

what json files? we need more details BrokenSegue (talk) 15:31, 7 June 2024 (UTC)[reply]
We are students from TH Köln tasked with automating the process of updating data for scientists on Wikidata. Our objective includes verifying the presence of researchers and creating entries if they are not already listed. Similarly, we extend this process to projects, such as those found in GEPRIS, where these researchers have been involved. Subsequently, our goal is to establish connections between these projects and the respective researchers.
Our JSON files contain comprehensive data necessary for expanding information on researchers (QID, name) and their associated projects (project name, project ID) within Wikidata. This ensures that accurate and up-to-date information is seamlessly integrated into the Wikidata ecosystem.
This approach leverages automated tools and careful data handling to contribute valuable knowledge to the scientific community on Wikidata. IliasChoumaniBot (talk) 14:35, 17 June 2024 (UTC)[reply]
What is the ultimate source of the data, where is t published that TH Köln students can access it? Stuartyeates (talk) 19:19, 16 July 2024 (UTC)[reply]
We have the data from various online sources such as GEPRIS, ORCID or PubMed. We have extracted data from various German scientists and their publications and would like to automatically insert them into Wikidata as part of our studies. IliasChoumaniBot (talk) 11:01, 18 July 2024 (UTC)[reply]


Browse9ja bot (talk • contribs • new items • new lexemes • SUL • block log • user rights log • user rights • xtools)
Operator: Browse9ja

Task/s: Automated data retrieval and updates for Browse9ja project, focusing on Nigerian and African-based information, integrating a chatbot, NLP API, knowledge graph, and machine learning model.

Code: (Not applicable, as I am using a combination of existing APIs and services)

Function details:

The Browse9ja bot is designed to perform the following tasks:

- Retrieve and update data on Wikidata related to Nigerian and African-based information
- Integrate with a chatbot to provide users with accurate and up-to-date information
- Utilize natural language processing (NLP) API for text analysis and understanding
- Contribute to the development of a knowledge graph for African-based information
- Apply machine learning models to improve data accuracy and relevance

The bot will operate under the supervision of the operator (Browse9ja) and adhere to Wikidata's policies and guidelines. --Browse9ja (talk) 02:16, 16 May 2024 (UTC)[reply]

  Comment OP has no track record of contributions either here or on any other project.
  Question Can you please give more details of how the chatbot will be integrated? Do you intend to have an LLM suggest content to add to Wikidata? Bovlb (talk) 15:37, 21 May 2024 (UTC)[reply]
Details of Chat-bot Integration as requested: My chat-bot will be integrated into the Browse9ja.com as a bot to provide users with accurate and up-to-date information related to Nigerian and African-based data on Wikidata. The integration will involve utilizing a natural language processing (NLP) API for text analysis and understanding. The Chat-bot will enable users to interact with the Browse9ja bot in a conversational manner, allowing for seamless access to information and updates on Wikidata. Additionally, the chat-bot will play a role in contributing to the development of a knowledge graph for African-based information. While the chat-bot will facilitate user interaction, the machine learning models will be applied to improve data accuracy and relevance, ensuring that the information provided is of high quality and relevance to the users.
About LLM Content Suggestion: The chat-bot integrated with Browse9ja bot will have the capability to suggest content to add to Wikidata. Leveraging natural language processing (NLP) and machine learning models, the chat-bot will be able to analyze user queries and suggest relevant content for addition to Wikidata. This functionality aligns with the broader goal of the Browse9ja bot to automate data retrieval and updates for Nigerian and African-based information, ensuring that the information contributed to Wikidata is accurate, up-to-date, and relevant.
Hope this clarifies my intent and also increases my chances of approval. Thanks a lot.
Browse9ja bot (talk) 13:12, 25 May 2024 (UTC)[reply]

OpeninfoBot (talk • contribs • new items • new lexemes • SUL • block log • user rights log • user rights • xtools)
Operator: Fordaemdur (talk • contribs • logs)

Task/s: importing financial data (assets, equity, revenue, EBIT, net profit) from openinfo.uz to entries on public Uzbek companies in Wikidata.

Code:

Function details: I have a project going with openinfo.uz, which is a state-owned public portal for financial disclosures of all public Uzbek companies. All joint-stock companies and banks in Uzbekistan have to disclose their financials there by law. I have created entries for all Uzbek banks at User:Fordaemdur/Uzbek banks and would like to test imports of financial data there (Openinfo is ready to provide an API for that). If successful, the bot will import financials once per quarter. Next steps would also be creating entries for all other notable public Uzbek companies, not just banks, and importing financials there too. --Fordaemdur (talk) 11:14, 16 April 2024 (UTC)[reply]

How many companies are we talking about? ChristianKl18:57, 17 April 2024 (UTC)[reply]
@ChristianKl, currently there are items on about 50 public Uzbek companies (30+ are banks); all can be found on my userpage. I am planning on creating items for all companies listed at the Tashkent Stock Exchange, so we'll end up with about 150 companies. There are about 600 joint-stock companies in Uzbekistan and I assume at least one third of them are notable. The test will be run on a few companies (a mix of banks and corporates), and I don't expect more than 100 edits in a test run. If the test run is successful, the bot will be occupied with populating the items that I'm manually creating right now (checking notability for each individual entry before creating it). Best, --Fordaemdur (talk) 19:17, 17 April 2024 (UTC)[reply]
Add:Openinfo.uz now has an entry to facilitate referencing its data: Unified Portal of Corporate Information Data (Q125505748) --Fordaemdur (talk) 19:19, 17 April 2024 (UTC)[reply]
  Support adding all joint-stock companies is fine given the kind of notability rules we have. If you wanted to add small businesses as well, it would be a harder call whether or not to allow it. ChristianKl 11:48, 18 April 2024 (UTC)[reply]
Thank you for clarification. I confirm that I won't be working on small businesses. Openinfo and Tashkent Stock Exchange (which i'm using for data imports) only have data on joint-stock companies. Best, --Fordaemdur (talk) 14:48, 18 April 2024 (UTC)[reply]
@Fordaemdur: reminder to make your test edits (or do you want to have the discussion closed?) --Wüstenspringmaus talk 09:17, 15 February 2025 (UTC)[reply]

So9qBot (talk • contribs • new items • new lexemes • SUL • block log • user rights log • user rights • xtools)
Operator: So9q (talk • contribs • logs)

Task/s: Add DDO identifier to Danish lexemes.

Code: https://github.com/dpriskorn/LexDDO

Function details: Checks whether there are multiple hits in DDO for a lemma; if yes, it is skipped. Checks whether there are multiple lexemes with the same lemma and lexical category in WD; if yes, it skips. Otherwise we have a match and the upload is done. If we get a 404 from DDO, a "not found in" + time statement is added. This is the easiest, low-hanging-fruit kind of matching. I vetted the edits and they seem good to me. See ~50 test edits here: https://www.wikidata.org/w/index.php?title=Special:Contributions/So9q&target=So9q&offset=20240105165217 --So9q (talk) 18:41, 5 January 2024 (UTC)[reply]
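A condensed sketch of the matching rule described above; the DDO lookup and Wikidata queries are passed in as hypothetical helper functions, since the real implementation lives in the linked repository.

```python
def process_lemma(lemma, lexical_category, search_ddo, find_wd_lexemes,
                  add_ddo_id, add_not_found_statement):
    """Upload a DDO ID only when the match is unambiguous on both sides.
    The four helpers are hypothetical stand-ins for the real lookups/uploads."""
    ddo_hits = search_ddo(lemma)
    if ddo_hits is None:                 # DDO returned 404: record "not found in" + time
        add_not_found_statement(lemma, lexical_category)
        return "not found"
    if len(ddo_hits) != 1:               # ambiguous on the DDO side: skip
        return "skipped"
    lexemes = find_wd_lexemes(lemma, lexical_category)
    if len(lexemes) != 1:                # ambiguous on the Wikidata side: skip
        return "skipped"
    add_ddo_id(lexemes[0], ddo_hits[0])  # unambiguous match: upload the identifier
    return "added"
```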

What is this? Ymblanter (talk) 20:01, 11 January 2024 (UTC)[reply]
It is a placeholder. I add it when there are multiple choices for lexemes or no lexeme match, as in this case. If they were numbered (by a bot or a yet-to-be-written user script, perhaps), one could read it as: in the second position we don't know which lexeme corresponds. So9q (talk) 08:46, 7 October 2024 (UTC)[reply]
Are you still interested in the bot approval? Ymblanter (talk) 18:41, 8 October 2024 (UTC)[reply]
Yes, but I prefer that the community okay it first. Maybe @Fnielsen wants to support? So9q (talk) 15:07, 3 December 2024 (UTC)[reply]
@So9q, Ymblanter: This is fine by me. There is a Mix'n'Match-like tool here https://mishramilan.toolforge.org/#/catalogs/95 that also work on the DDO property. Finn Årup Nielsen (fnielsen) (talk) 21:46, 4 December 2024 (UTC)[reply]


So9qBot 8 (talk • contribs • new items • new lexemes • SUL • block log • user rights log • user rights • xtools)
Operator: So9q (talk • contribs • logs)

Task/s: Add missing names of European legal documents to labels and aliases of items with a CELEX identifier

Code: logic diagram, code

Function details: This is important for our coverage of EU legal documents. A bug is blocking creation of 50 test edits.--So9q (talk) 15:07, 17 December 2023 (UTC)[reply]

The bug has been fixed. See test edits So9q (talk) 17:41, 2 January 2024 (UTC)[reply]
@Samoasambia thanks for moving the test edits to title as suggested by the model and Ainali <3 So9q (talk) 08:56, 7 October 2024 (UTC)[reply]

Discussion

  •   Support looks useful, thanks! -Framawiki (please notify !) (talk) 14:34, 6 January 2024 (UTC)[reply]
  •   Question Wouldn't title (P1476) be better than official name (P1448)? (That is what we used for the Swedish parliamentarian documents.) Ainali (talk) 08:41, 11 January 2024 (UTC)[reply]
    Yes, thanks for the suggestion. So9q (talk) 08:49, 7 October 2024 (UTC)[reply]
  • @So9q: FYI, I created some data modeling for EU legal acts here. The EUR-Lex metadata is available through a SPARQL end point which gives us some additional data compared to scraping. –Samoasambia 18:38, 9 March 2024 (UTC)[reply]
    Oh, I was not aware of the WikiProject. Looks very nice and title is suggested there like Ainali did above. I'm not sure the SPARQL endpoint is needed nor desired for this task. I had a look back when I wrote this request and ditched it. Can't remember why, but this code works and is reasonably fast :) So9q (talk) 08:53, 7 October 2024 (UTC)[reply]
  • @Samoasambia, Ainali, Framawiki: I updated the code to use title. I also fixed a small bug which caused duplicate references when the script was rerunning. I also added editgroups so anyone can later undo the changes in bulk easily if needed. I'm ready to run it on all ~4000 items with CELEX id now.--So9q (talk) 21:32, 8 October 2024 (UTC)[reply]
    Are there some test edits with the updated code? Ainali (talk) 21:41, 8 October 2024 (UTC)[reply]
    I'm planning to add data to EU legal acts and to create new items via the EUR-Lex SPARQL endpoint but scraping the titles is fine for me. Makes my life a bit easier :). I'd still add stated in (P248) = EUR-Lex (Q1276282) to the references but otherwise looks great to me. Samoasambia 22:13, 8 October 2024 (UTC)[reply]
    Fixed, see Test edit.
    Note: no reference is added to existing title-statements (this is to avoid duplicate references with different dates on consecutive runs of the script).
    The script is idempotent. It only adds missing title-statements, never remove or change existing statements.
    I added editgroups so a complete run of the script can be rolled back easily.--So9q (talk) 09:10, 18 October 2024 (UTC)[reply]
    I added extraction of the "EUID", e.g. "(EU) 1979/110", from en descriptions in WD and added them as mul aliases. They make it easier to look up laws in Wikidata using the search bar and are used as IDs by e.g. the Swedish government. See test edit. So9q (talk) 12:16, 18 October 2024 (UTC)[reply]
    Looks good to me, So9q. However, there are some issues with the "EUID". The initialisms in the identifier stand for the legal domain under which the act was passed (European Union, European Economic Community, European Atomic Energy Community etc.). The current naming format of legal acts has been in use only since January 2015, so for example "(EU) 1979/110" is not correct; it should be "79/110/EEC" (in English, different in other languages). Since the Lisbon treaty most new acts have the legal domain "EU", but some also have "EU, Euratom" or "CFSP". The legal domain abbreviations are language-specific, so while in English it's "EU", in French it's "UE" and in Irish "AE" etc. I added a table of all of them here. More information can be found at the Interinstitutional Style Guide.
    So I would recommend that the bot shouldn't add "EUIDs" with the legal domains to mul aliases, because the format depends on the language. However, adding only the year-and-number part (e.g. "79/110", "2016/679") is fine and I support that. I have started working on Python code that would extract short labels for legal acts from the full titles in different languages using regex. Maybe we could work on that together if I add the code to GitHub? Samoasambia 19:38, 18 October 2024 (UTC)[reply]
    Oh, I was not aware that the EUID had a component that differs along both language and legal domains. Thanks for the table. I can use that to translate the legal domain part before adding the alias.
    This is becoming increasingly complicated. EU is so complicated :sweat smile:
    I dug a little and found a use of the "EUID" without the parentheses, "EU 2023/138", from a Swedish government agency.
    So now we have 5 different EUIDs used by governmental workers to refer to the same law (a rough extraction sketch follows after this list):
    • long EUID with parens e.g. "(EU) 2023/138"
    • long EUID without parens e.g. "EU 2023/138"
    • short EUID without the legal domain e.g. "2023/138"
    • ELI IDs (we are missing a property, see Wikidata:Property proposal/European Legislation Identifier) (used in EUR-Lex, but not by e.g. the Swedish government)
    • CELEX ids (used in EUR-Lex and Cellar, but not by e.g. the Swedish government)
    So9q (talk) 12:24, 19 October 2024 (UTC)[reply]
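A rough sketch of extracting the language-independent year-and-number part from the post-2015 long forms; it does not handle the older "79/110/EEC" style, and the regex is illustrative rather than the script's actual code.

```python
import re

# Matches the post-2015 long forms, e.g. "(EU) 2023/138", "EU 2023/138",
# "(EU, Euratom) 2016/1104"; the capture group is the language-independent part.
EUID_PATTERN = re.compile(r"\(?\s*[A-Z]{2,8}(?:,\s*[A-Z][a-z]+)?\s*\)?\s+(\d{2,4}/\d+)")

def short_euid(text: str) -> str | None:
    """Return e.g. '2023/138' from a description containing a long EUID, else None."""
    match = EUID_PATTERN.search(text)
    return match.group(1) if match else None
```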
    I added support for localized EUIDs according to the table provided by @Samoasambia and only add the "short EUID" to mul. I did not add support for Euratom and CFSP for now (I set the script to raise an exception if the EUID cannot be extracted and will implement it if needed when the script fails). See test edit
    Also added support for extracting and adding the localized "EECID" e.g. "80/1177/EEC" to aliases, see test edit
    @Ainali, @Samoasambia WDYT? :) --So9q (talk) 16:53, 19 October 2024 (UTC)[reply]
    Do we really need to add the same alias in multiple languages? If it exists in one language, it shows up in the search independent of what language one is using. Is there some added value for this that I am not seeing? Ainali (talk) 18:26, 19 October 2024 (UTC)[reply]
    It is the most lightweight way we have, so yes, it is necessary; if we added all the variants to mul as aliases instead, we would lose information. They are valid for each of the languages and deduplicated in the database, so nothing to worry about IMO. So9q (talk) 07:59, 20 October 2024 (UTC)[reply]
    I still have a couple of issues left. Firstly, I think we shouldn't use the full titles as labels; instead we should be using some sort of short titles. Unfortunately they are not directly available on EUR-Lex, but I did some regex magic for extracting them out of the full titles in all official languages. You can find it here. Currently it works in 22 out of 24 languages and for nearly all acts published since 1 January 2015. Adjusting it for earlier acts still needs some extra work. The second issue is that I don't think the "long EUID without parens" (e.g. EU 1980/1177) is anything official, so I wouldn't include that. EUR-Lex seems to use only the version with parens, and that is what the interinstitutional style guide says [4][5]. And finally I would put stated in (P248) before the URL in the references since it looks a bit nicer that way :). Otherwise looks good to me! Samoasambia 22:20, 28 October 2024 (UTC)[reply]
    I agree, short labels are nicer, thanks for working on that!
    I suggest we use the shortest. I know that the "long EUID without parens" is not official, but it helps people who try to do entity recognition in case it is used in the wild, so I would still like to add it as an alias.
    Since your code does not work for all languages, how do you suggest we proceed? Should we proceed with what is currently working and add long labels for the ones where it does not? Or should we fix this first before proceeding?
    Could you detail how it fails so we can fix it?
    Is there a bug in the re module regarding IGNORECASE? Do you have a link to a bug report in that case? So9q (talk) 09:37, 27 November 2024 (UTC)[reply]
    @Samoasambia I added your logic to the Title class and added some tests too. It currently only seems to fail for Greek. What other language doesn't work as expected?
    Would you be willing to provide a regex for Greek that works around the IGNORECASE bug? So9q (talk) 19:05, 27 November 2024 (UTC)[reply]
  • @Ymblanter: ready for approval?--So9q (talk) 21:34, 25 October 2024 (UTC)[reply]
    I will wait for a few days to see whether there are objections. Ymblanter (talk) 19:34, 26 October 2024 (UTC)[reply]
  • @Samoasambia, So9q, Ymblanter: What is the situation here now? --Wüstenspringmaus talk 08:51, 15 February 2025 (UTC)[reply]
    I'm on a wikibreak right now. The code is ready if the community does not have any additional objections.
    The short title still doesn't work for Greek, but I don't know how to solve that. I'm thinking it can be solved once the regex bug has been fixed, or by anyone with the required knowledge of regex workarounds or Greek or both. So9q (talk) 10:12, 19 February 2025 (UTC)[reply]


RudolfoBot (talk • contribs • new items • new lexemes • SUL • block log • user rights log • user rights • xtools)
Operator: RudolfoMD (talk • contribs • logs)

Task/s: importing the FDA list of drugs with black box warnings; setting Property / legal status (medicine): boxed warning.

Code: N/A

Function details: Continue importing FDA list of Drugs With Black Box Warnings, as I've been doing, with OpenRefine. Ideally hope to create or have someone run a bot to maintain the data.

OpenRefine urges me to submit large edit batches for review. I've done ~400 in batches of ~200.
I want to do more, like https://www.wikidata.org/w/index.php?title=Q7939256&diff=prev&oldid=2019984699&diffmode=source.
This is what's set:
Property / legal status (medicine): boxed warning / rank
Property / legal status (medicine): boxed warning / reference
 reference URL: https://nctr-crs.fda.gov/fdalabel/ui/spl-summaries/criteria/343802
 title: FDA-sourced list of all drugs with black box warnings (Use Download Full Results and View Query links. (English)

I want to match more widely: on Q113145171, which has ~500 matches, and on the other types listed below, which match and are drugs of some kind.
The table has ~1600 rows, and the bulk already have a matching drug in Wikidata.

Types: 
Q113145171 type of chemical entity (658)
Q59199015 group of stereoisomers (51)
Q12140 medication	DONE- first extract, I think (need to redo to add cites)
Q169336 mixture (45)
Q79529 chemical substance (40)
Q1779868 combo drug (28)
Q35456 essential med (13)
Q119892838 type of mixture of chem (3)
Q28885102 pharm prod (3)
Q467717 racemate (3)
Q8054 protein (biomolecule) (4)
Q422248 mab (12)
Q679692 biopharmaceutical (6)
Q213901 gene therapy (4)
Q2432100 vet drug (3)

I do not want to do this for the types:
Q13442814 article (NO)
Q30612 clinical trial (NO)
Q7318358 review article (NO)
Q16521 taxon (NO?) 

--RudolfoMD (talk) 09:29, 29 November 2023 (UTC)[reply]

  •   Comment Looks useful! Can we see some test edits with the actual bot code to be used?

GamerProfilesBot (talk • contribs • new items • new lexemes • SUL • block log • user rights log • user rights • xtools)
Operator: Parnswir (talk • contribs • logs)

Task/s: Backfill GamerProfiles game IDs (P12001)

Code: https://github.com/GP-9000/GamerProfilesBot

Function details: The bot will regularly update existing video games with the GamerProfiles game ID (P12001) sourced from https://gamerprofiles.com. We plan to update the initial batch of around 55,000 games within a month of approval and then switch to a more relaxed (on-demand) update process.

--Parnswir (talk) 11:05, 5 October 2023 (UTC)[reply]

@Parnswir: Is Master Jaro (talkcontribslogs) also your account (uses "we", see Special:Diff/1960163586, Special:Diff/1968406273) or is it another employee? If so, he/she should also disclose the paid editing. Regards Kirilloparma (talk) 06:32, 10 November 2023 (UTC)[reply]
@Kirilloparma @Lymantria Thank you for the info everyone! I didn't know about the "paid contributions" info before. And yes, I am a different person :) Since high-quality edits are also in the interest of the company, I have added the paid contributions template to my page as well now. Just let me know if anything else is missing. I've learned quite a bit over the last months, and will keep doing my best to produce helpful edits. Master Jaro (talk) 15:33, 10 November 2023 (UTC)[reply]
Please make 50 test edits and link them here. So9q (talk) 10:38, 2 January 2024 (UTC)[reply]
The contributions were already made on October 5th 2023: https://m.wikidata.org/wiki/Special:Contributions/GamerProfilesBot Parnswir (talk) 16:40, 2 January 2024 (UTC)[reply]
@Kirilloparma @Jean-Frédéric @BrokenSegue @Lymantria @So9q Thank you for your efforts everyone! Is there anything more we can do to help move this project forward? We would love to add more of the relevant IDs next to the other game edits we make along the way. Any help is highly appreciated :) Master Jaro (talk) 16:35, 27 March 2024 (UTC)[reply]

WingUCTBOT (talk • contribs • new items • new lexemes • SUL • block log • user rights log • user rights • xtools)
Operator: Tadiwa Magwenzi (talk • contribs • logs)

Task/s: Batch upload of Niger-Congo B lexemes, including senses and forms.

Code:https://github.com/Boomcarti/WingUCTBOT

Function details: Upload of 550 isiZulu nouns as lexemes, including their associated forms and senses. --WingUCTBOT (talk) 10:07, 31 July 2023 (UTC)[reply]

Please make some test edits. Ymblanter (talk) 19:19, 7 August 2023 (UTC)[reply]
Greetings! I hope you are well. I have performed 200 test edits, as seen on the Test Wikidata site, and am awaiting approval to split the 500 isiZulu nouns into batches and then upload them. WingUCTBOT (talk) 23:14, 15 August 2023 (UTC)[reply]
I am sorry but could you please provide a link to the test edits on Testwiki. Ymblanter (talk) 18:17, 7 September 2023 (UTC)[reply]
I've just redone about 250 test edits; they are on the Test Wikidata recent changes page. Some examples: https://test.wikidata.org/wiki/Lexeme:L3768 , https://test.wikidata.org/wiki/Lexeme:L3753 . The link to the page: Recent changes - Wikidata . WingUCTBOT (talk) 18:14, 9 September 2023 (UTC)[reply]
I took a quick look at the code. Are you aware of the Python library WikibaseIntegrator, which supports lexemes?
I would prefer if you used that or a similar library to make sure you honor the maximum edit rate on the servers.
Would you be willing to do that? So9q (talk) 10:50, 2 January 2024 (UTC)[reply]


The Lexemes were sourced manually by Professor M.Keet and Langa Khumalo.

https://github.com/mkeet/GENIproject/tree/master/isiZulupluraliser/isiZulu

@WingUCTBOT, Tadiwa Magwenzi: Your code appears to add the same sense multiple times and, among forms, adds the plural of a noun multiple times without including a form for the singular. (You may wish to consider using tfsl for your import; once it is installed, an overview of how it is used may be found here.) Mahir256 (talk) 00:05, 16 August 2023 (UTC)[reply]
Understood, will fix it now. WingUCTBOT (talk) 17:21, 16 August 2023 (UTC)[reply]
Good evening. I have addressed your concerns with the code and have uploaded a test batch of 50+ lexemes (isiZulu nouns, along with their senses and forms). WingUCTBOT (talk) 22:36, 16 August 2023 (UTC)[reply]
In time, I do intend to refactor the code to use tfsl. WingUCTBOT (talk) 23:09, 16 August 2023 (UTC)[reply]
@WingUCTBOT, Tadiwa Magwenzi: What is the situation here? Wüstenspringmaus talk 14:54, 15 March 2025 (UTC)[reply]

MajavahBot (talkcontribsnew itemsnew lexemesSULBlock logUser rights logUser rightsxtools)
Operator: Taavi (talkcontribslogs)

Task/s: Import version and metadata information for Python libraries from PyPI.

Code: https://gitlab.wikimedia.org/toolforge-repos/majavah-bot-wikidata/-/blob/main/majavah_wd_bot/pypi_sync/main.py

Function details: For items with PyPI project (P5568) set, imports the following data from PyPI:

Additionally, the PyPI project (P5568) value will be updated to the normalized name if it is not already in that form.

Taavi (talk) 19:54, 11 July 2023 (UTC)[reply]
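For reference, the normalisation and the PyPI endpoint involved are straightforward; a small sketch (not the bot's code) of the fetch-and-normalise step, using the PEP 503 name normalisation rule:

    import re
    import requests

    def normalize(name: str) -> str:
        """PEP 503 normalisation, as used for the P5568 value."""
        return re.sub(r'[-_.]+', '-', name).lower()

    def fetch_pypi(name: str) -> dict:
        """Return the PyPI JSON metadata for a project (releases, latest version, etc.)."""
        resp = requests.get(f'https://pypi.org/pypi/{normalize(name)}/json', timeout=30)
        resp.raise_for_status()
        return resp.json()

    meta = fetch_pypi('Home-Assistant')
    print(normalize('Home-Assistant'))   # 'home-assistant'
    print(meta['info']['version'])       # latest release
    print(len(meta['releases']))         # number of releases the bot could import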

how many statements do you think this will add? don't some packages have...lots of versions? BrokenSegue (talk) 20:05, 11 July 2023 (UTC)[reply]
Good point. There are about 200k releases it could import (for about 2k packages total, so about 90 per package on average). Taking an approach similar to Github-wiki-bot and only importing the most recent versions could bring it down to 75k for the last 100 (33 per package on average) or 50k for the last 50 (22 per package on average). Taavi (talk) 20:50, 11 July 2023 (UTC)[reply]
i don't suppose major releases only is an option? BrokenSegue (talk) 20:54, 11 July 2023 (UTC)[reply]
I don't think there's a consistent enough definition for that. For example, Home Assistant (Q28957018) now does year.month.patch-type releases, so the first digit changing isn't really meaningful.
However, I can filter out all packages generated from https://github.com/vemel/mypy_boto3_builder, as those are all very similar and not intended for direct human use anyway. That cuts the total number of versions to a third (~70k) even before applying any other per-package limits. Taavi (talk) 21:15, 11 July 2023 (UTC)[reply]
See also Wikidata:Requests for permissions/Bot/RPI2026F1Bot 5 for discussion of a previous similar task (apparently not active). Github-wiki-bot imports version data from GitHub (see e.g. the history of modelscope (Q120550399)); however, be aware that version numbers may differ between GitHub and PyPI.--GZWDer (talk) 11:38, 12 July 2023 (UTC)[reply]
Oh yes, the RPI2026F1Bot task looks somewhat similar. I'm aware of Github-wiki-bot, but there are quite a few PyPI projects that are not hosted on GitHub, and I think my code should be able to handle items with data from both and ensure the two bots don't start edit warring, for example. Taavi (talk) 17:23, 12 July 2023 (UTC)[reply]
@Taavi: Please make some test edits. --Wüstenspringmaus talk 11:05, 29 August 2024 (UTC)[reply]


FromCrossrefBot (talkcontribsnew itemsnew lexemesSULBlock logUser rights logUser rightsxtools)
Operator: Carlinmack (talkcontribslogs)

Task/s: Using information from Crossref:

  1. Add publication date to items where they are not present in Wikidata
  2. Fix publication dates where they are erroneous

Code: Will be using Pywikibot in a similar way to what I have done previously with this bot.

Function details: Previously this bot has been used to add CC licenses to items, which was successful. In March 2022 it was realised that other bots/tools were using the wrong Crossref date as the publication date. Since I am working with this dump, I will step up to try to fix this issue.

A simpler task is to fill in the dates for items without publication dates. I've created a set of 80k items and, once given the go-ahead, I will contribute these dates.

The issue of the wrong dates is a little more complicated, as there are false positives on both sides: sometimes Crossref is wrong and sometimes Wikidata is wrong. I'm sure that Wikidata is wrong more often; however, before doing any edits I will do some manual validation to check the prevalence of false positives. When I am fairly confident, I will start editing, and I'll see whether I can deprecate the existing statement, add a reason, and add the new date as preferred. If that is not possible due to limitations in Pywikibot, I'll remove the previous statement instead. --Carlinmack (talk) 14:31, 7 July 2023 (UTC)[reply]
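For what it is worth, deprecating the old statement and preferring the new one is possible with Pywikibot's Claim.changeRank; a rough sketch under placeholder values (the item ID, date and the target of the "reason for deprecated rank" qualifier are all hypothetical):

    import pywikibot
    from pywikibot import WbTime

    site = pywikibot.Site('wikidata', 'wikidata')
    repo = site.data_repository()

    item = pywikibot.ItemPage(repo, 'Q00000000')   # placeholder article item
    item.get()

    # Deprecate the erroneous publication date (P577) statement and record a reason.
    wrong = item.claims['P577'][0]
    wrong.changeRank('deprecated')
    reason = pywikibot.Claim(repo, 'P2241', is_qualifier=True)   # reason for deprecated rank
    reason.setTarget(pywikibot.ItemPage(repo, 'Q00000000'))      # e.g. an "incorrect value" item, placeholder
    wrong.addQualifier(reason)

    # Add the Crossref date and promote it to preferred rank.
    fixed = pywikibot.Claim(repo, 'P577')
    fixed.setTarget(WbTime(year=2020, month=5, day=1))
    item.addClaim(fixed, summary='correct publication date per Crossref')
    fixed.changeRank('preferred')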

  •   Support This seems useful. However I see only one example edit for this so far, maybe you should do some more just to verify it's doing what we expect? You will be using the "published" date-parts data in the Crossref json files for this? If an item already has the correct published date value will you add the reference? Maybe that should only be done if the published date doesn't already have a reference though... ArthurPSmith (talk) 18:17, 24 July 2023 (UTC)[reply]
Pls make some test edits.--Ymblanter (talk) 15:53, 9 August 2023 (UTC)[reply]
@User:Carlinmack: What about "erroneous" in Crossref and corrected in WD? --Succu (talk) 20:19, 7 November 2023 (UTC)[reply]
@Succu, Carlinmack: What is the situation here? Are you still interested in an approval? Wüstenspringmaus talk 15:00, 15 March 2025 (UTC)[reply]

ACMIsyncbot (talkcontribsnew itemsnew lexemesSULBlock logUser rights logUser rightsxtools)
Operator: Pxxlhxslxn (talkcontribslogs)

Task/s: Sync links with ACMI API.

Code: https://github.com/ACMILabs/acmi-wikidata-bot/blob/main/acmi_bot.py

Function details: As part of an upcoming residency with the ACMI (Q4823962) I have written a small bot to pull Wikidata links from their public API and write them back to Wikidata, to ensure sync between the two resources. The plan was to integrate this as part of the build workflow for the ACMI API (https://github.com/ACMILabs/acmi-api). This is currently set to append only, not removing any links Wikidata-side. While the initial link count is only around 1,500, there will likely be significant expansion in the coming weeks as we identify further overlaps. --Pxxlhxslxn (talk) 00:36, 16 May 2023 (UTC)[reply]

can you add a reference? can you set an edit summary (just add a "summary" arg to the write call)? Otherwise looks good. BrokenSegue (talk) 01:23, 16 May 2023 (UTC)[reply]
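Assuming the bot uses WikibaseIntegrator (as the error message quoted further down suggests), both suggestions amount to something like the following sketch; the property number for the ACMI identifier, the item ID and the API URL are placeholders to verify:

    from wikibaseintegrator import WikibaseIntegrator, wbi_login
    from wikibaseintegrator.datatypes import ExternalID, URL

    login = wbi_login.Login(user='AcmiBot', password='***')   # placeholder credentials
    wbi = WikibaseIntegrator(login=login)

    item = wbi.item.get('Q00000000')                          # placeholder work item

    # The ACMI identifier claim, backed by a reference URL (P854).
    claim = ExternalID(
        prop_nr='P0000',                                      # the ACMI ID property, placeholder
        value='works/12345',
        references=[[URL(prop_nr='P854', value='https://api.acmi.net.au/works/12345/')]],
    )
    item.claims.add(claim)
    item.write(summary='sync ACMI work identifier from the ACMI API')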
Oh dear, I have tried to change the bot name and now I see I have screwed things up a bit in relation to this form (ie the discussion is still under the old name). Should I just open a new request? I have also added the edit summary to the write function. Pxxlhxslxn (talk) 10:48, 16 May 2023 (UTC)[reply]
No need to open a new request as far as I am concerned. Ymblanter (talk) 19:06, 17 May 2023 (UTC)[reply]
We have now finished the test sample group for the bot and it is working as expected. Are there any other requirements or impediments to being added to the "bot" group? I also had a question about something we have encountered: the code and credentials work fine when run alone as a standalone Python process, but when integrated as a GitHub action (triggered by the ACMI API build) there is a "wikibaseintegrator.wbi_exceptions.MWApiError: 'You do not have the permissions needed to carry out this action.'" error message. Has anyone ever encountered this issue before? The only factor I can think of is maybe some kind of IP block. --Pxxlhxslxn (talk) 11:52, 2 June 2023 (UTC)[reply]
I don't think it's an IP block. BrokenSegue (talk) 20:40, 22 June 2023 (UTC)[reply]
@Pxxlhxslxn: Are you still interested in an approval? Wüstenspringmaus talk 14:58, 15 March 2025 (UTC)[reply]

WikiRankBot (talkcontribsnew itemsnew lexemesSULBlock logUser rights logUser rightsxtools)

Operator: Danielyepezgarces (talkcontribslogs)

Task/s: Use Alexa rank (P1661)

Code: To be published soon.

Function details: I am making a bot that can track the monthly ranking of websites based on Similarweb Ranking. The bot will receive a list of websites with their corresponding Wikidata IDs and domains to keep the data accurate.

The bot will have to use the Similarweb Top Sites API to get the traffic ranking of each website and store it in a MySQL database along with the date of the ranking. If the website already exists in the database, the bot should update its ranking and date every time there is a new ranking update.

Soon the bot will include some new features that will be communicated in the future.
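A sketch of the storage step described above, assuming a hypothetical site_rank table with a unique key on (qid, ranked_on); the Similarweb API client itself is omitted, since its endpoint details are not given here, and all values in the example call are illustrative placeholders.

    import pymysql  # any MySQL driver would do

    UPSERT = """
    INSERT INTO site_rank (qid, domain, similarweb_rank, ranked_on)
    VALUES (%s, %s, %s, %s)
    ON DUPLICATE KEY UPDATE similarweb_rank = VALUES(similarweb_rank)
    """

    def store_rank(conn, qid: str, domain: str, rank: int, ranked_on: str) -> None:
        """Insert the monthly rank, or update it if a row for that month already exists."""
        with conn.cursor() as cur:
            cur.execute(UPSERT, (qid, domain, rank, ranked_on))
        conn.commit()

    conn = pymysql.connect(host='localhost', user='wikirank', password='***', database='wikirank')
    store_rank(conn, 'Q00000000', 'example.com', 42, '2023-05-01')  # placeholder values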

The Similarweb ranking is not this property. It is Similarweb ranking (P10768).--GZWDer (talk) 05:16, 12 May 2023 (UTC)[reply]
You are correct: the bot will use the property P10768 and replace the old property P1661, since the public Alexa Rank data has ceased to exist.
When I wrote "Similarweb Ranking" I did not mean the property P10768, but rather that the bot takes its data from the similarweb.com website. Danielyepezgarces (talk) 16:15, 17 May 2023 (UTC)[reply]
what edits is this bot making? BrokenSegue (talk) 15:59, 22 February 2024 (UTC)[reply]

ForgesBot (talkcontribsnew itemsnew lexemesSULBlock logUser rights logUser rightsxtools)
Operator: Dachary (talkcontribslogs)

Task/s: Add licensing information to software forge entries in accordance with what is found in the corresponding Wikipedia pages. It is used as a helper in the context of the Forges project.

Code: https://lab.forgefriends.org/friendlyforgeformat/f3-wikidata-bot/

Function details: ForgesBot is a CLI tool designed to be used by participants in the Forges project in two steps. First, it is run to do some sanity checks, such as verifying that forges are associated with a license. If some information is missing, the participant can add it manually or use ForgesBot to do so.

The implementation includes one plugin for each task; there is currently only one plugin, to verify and edit the license information. The license is deduced by querying the Wikipedia pages of each piece of software: if they consistently mention the same license, the edit can be made. If there are discrepancies, they are reported and no action is taken.

--Dachary (talk) 09:29, 26 April 2023 (UTC)[reply]
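A rough illustration of the consistency rule described above (the license lookup table and function names are hypothetical, and the Q-ids should be verified; the real plugin lives in the linked repository):

    import pywikibot

    # Hypothetical mapping from license names found in article text to Wikidata items (Q-ids to verify).
    KNOWN_LICENSES = {'GNU General Public License': 'Q7603', 'MIT License': 'Q334661'}

    def licenses_mentioned(item: pywikibot.ItemPage) -> set:
        """Collect the known licenses mentioned on the Wikipedia sitelinks of a forge item."""
        item.get()
        found = set()
        for dbname in item.sitelinks:
            if not dbname.endswith('wiki'):          # skip Wikisource, Commons, etc.
                continue
            lang = dbname[:-4].replace('_', '-')
            page = pywikibot.Page(pywikibot.Site(lang, 'wikipedia'), item.sitelinks[dbname].title)
            for name, qid in KNOWN_LICENSES.items():
                if name in page.text:
                    found.add(qid)
        return found

    def consensus(found: set):
        """Edit only when all pages agree on a single license; otherwise report and do nothing."""
        return next(iter(found)) if len(found) == 1 else None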

I don't think I understand the task. Can you do some (~30) test edits? Or try to explain again? BrokenSegue (talk) 17:13, 26 April 2023 (UTC)[reply]

LucaDrBiondi@Biondibot (talkcontribsnew itemsnew lexemesSULBlock logUser rights logUser rightsxtools)
Operator: LucaDrBiondi (talkcontribslogs)

Task/s: Import US patents from a CSV file

For example:

US11387028; Unitary magnet having recessed shapes for forming part of contact areas between adjacent magnets ;Patent number: 11387028;Type: Grant ;Filed: Jan 18, 2019;Date of Patent: Jul 12, 2022;Patent Publication Number: 20210218300;Assignee Whylot SAS (Cambes) Inventors: Romain Ravaud (Labastide-Murat), Loic Mayeur (Saint Santin), Vasile Mihaila (Figeac) ;Primary Examiner: Mohamad A Musleh;Application Number: 16/769,182

US11387027; Radial magnetic circuit assembly device and radial magnetic circuit assembly method ;Patent number: 11387027;Type: Grant ;Filed: Dec 5, 2017;Date of Patent: Jul 12, 2022;Patent Publication Number: 20200075208;Assignee SHENZHEN GRANDSUN ELECTRONIC CO., LTD. (Shenzhen) Inventors: Mickael Bernard Andre Lefebvre (Shenzhen), Gang Xie (Shenzhen), Haiquan Wu (Shenzhen), Weiyong Gong (Shenzhen), Ruiwen Shi (Shenzhen) ;Primary Examiner: Angelica M McKinney;Application Number: 16/491,313

US11387026; Assembly comprising a cylindrical structure supported by a support structure ;Patent number: 11387026;Type: Grant ;Filed: Nov 21, 2018;Date of Patent: Jul 12, 2022;Patent Publication Number: 20210183551;Assignee Siemens Healthcare Limited (Chamberley) Inventors: William James Bickell (Witney), Ashley Fulham (Hinkley), Martin Gambling (Rugby), Martin Howard Hempstead (Ducklington), Graeme Hyson (Milton Keynes), Paul Lewis (Witney), Nicholas Mann (Compton), Michael Simpkins (High Wycombe) ;Primary Examiner: Alexander Talpalatski;Application Number: 16/771,560


Code:

I am learning to write my bot to perform this operation. I am using curl in the C language. I have a bot account (for which I now want to request permission), but I get the following error message:

{"login":{"result":"Failed","reason":"Unable to continue login. Your session most likely timed out."}} {"error":{"code":"missingparam","info":"The \"token\" parameter must be set.","*":"See https://www.wikidata.org/w/api.php for API usage.

I think it is probably because my bot account is not yet approved...
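The quoted error is the standard MediaWiki API response when the login token step is skipped: with action=login you must first fetch a login token (keeping the session cookies), then post it as lgtoken, typically using a bot password from Special:BotPasswords. A minimal sketch of that flow, shown in Python for brevity (the bot-password name is a placeholder); the same sequence applies with libcurl:

    import requests

    API = 'https://www.wikidata.org/w/api.php'
    session = requests.Session()   # cookies must persist between the requests

    # 1. Fetch a login token.
    r1 = session.get(API, params={
        'action': 'query', 'meta': 'tokens', 'type': 'login', 'format': 'json',
    })
    login_token = r1.json()['query']['tokens']['logintoken']

    # 2. Log in with a bot password, passing the token as lgtoken.
    r2 = session.post(API, data={
        'action': 'login', 'lgname': 'Biondibot@patents', 'lgpassword': '***',
        'lgtoken': login_token, 'format': 'json',
    })
    assert r2.json()['login']['result'] == 'Success'

    # 3. Fetch a CSRF token for the editing actions (wbeditentity, wbcreateclaim, ...).
    r3 = session.get(API, params={'action': 'query', 'meta': 'tokens', 'format': 'json'})
    csrf_token = r3.json()['query']['tokens']['csrftoken']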

Function details:

Import items into Wikidata, starting from a title and description and these properties for now:

  • P31 (instance of): "United States patent"
  • P17 (country): "United States"
  • P1246 (patent number): link to Google Patents or similar
--LucaDrBiondi (talk) 18:25, 28 February 2023 (UTC)[reply]
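As a sketch of the creation step with Pywikibot (the Q-id for "United States patent" is left as a placeholder to verify; the values would come from CSV rows like those above):

    import pywikibot

    site = pywikibot.Site('wikidata', 'wikidata')
    repo = site.data_repository()

    def create_patent(number: str, title: str) -> pywikibot.ItemPage:
        item = pywikibot.ItemPage(repo)               # a new, not-yet-created item
        item.editEntity({
            'labels': {'en': title},
            'descriptions': {'en': f'United States patent {number}'},
        }, summary='create item for US patent')

        def add(prop: str, target) -> None:
            claim = pywikibot.Claim(repo, prop)
            claim.setTarget(target)
            item.addClaim(claim)

        add('P31', pywikibot.ItemPage(repo, 'Q00000000'))  # instance of: United States patent (placeholder Q-id)
        add('P17', pywikibot.ItemPage(repo, 'Q30'))        # country: United States of America
        add('P1246', number)                               # patent number
        return item

    # create_patent('US11387028', 'Unitary magnet having recessed shapes ...')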

@LucaDrBiondi How many patents are you planning to add this way? ChristianKl 12:33, 17 March 2023 (UTC)[reply]
The bot account to which you link doesn't exist. ChristianKl 12:34, 17 March 2023 (UTC)[reply]


Hi, I am still writing and testing it, and moreover it is not yet a bot, because it is not automatic.

I have imported the patent data into a SQL Server database. Then I read a patent and, with Pywikibot, I try for example to search for the assignee (owned by property). If I do not find a match, I search manually. Only if I am sure do I insert the data into Wikidata; this is because I do not want to add data with errors. For example, look at the item Q117193724. LucaDrBiondi (talk) 18:27, 17 March 2023 (UTC)[reply]
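For the assignee lookup, Pywikibot exposes the wbsearchentities API through search_entities; a small sketch of that step (the matching rule and threshold here are simplified assumptions, not the operator's actual logic):

    import pywikibot

    repo = pywikibot.Site('wikidata', 'wikidata').data_repository()

    def find_assignee(name: str):
        """Return the single unambiguous match for an assignee name, else None for manual review."""
        hits = list(repo.search_entities(name, 'en', total=5))
        exact = [h for h in hits if h.get('label', '').lower() == name.lower()]
        return exact[0]['id'] if len(exact) == 1 else None

    print(find_assignee('Siemens Healthcare Limited'))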




@ChristianKl
In the end I have developed a bot using Pywikibot.
It is not fully automatic, because the owned by property is mandatory for me,
so I verify whether Wikidata already has an item to use for this property.
If I do not find one, then I do not import the item (the patent).
I have already loaded a few hundred items, for example Q117349404.
Is there a limit on the number of items I can import each day?
At one point I received a warning message from the API.
Must I do something with my bot account?
Thank you for your help! LucaDrBiondi (talk) 16:08, 31 March 2023 (UTC)[reply]


Cewbot (talkcontribsnew itemsnew lexemesSULBlock logUser rights logUser rightsxtools)
Operator: Kanashimi (talkcontribslogs)

Task/s: Add sitelink to redirect (Q70893996) for sitelinks to redirects without intentional sitelink to redirect (Q70894304).

Code: github

Function details: Find sitelinks pointing to redirects in the wiki projects and check whether they already carry sitelink to redirect (Q70893996) or intentional sitelink to redirect (Q70894304). Add sitelink to redirect (Q70893996) to those sitelinks that carry neither badge. Also see Wikidata:Sitelinks to redirects. --Kanashimi (talk) 02:19, 15 November 2022 (UTC)[reply]
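A rough sketch of the per-sitelink check and badge edit with Pywikibot, assuming setSitelink accepts a badges list as in recent releases (the implementation discussed below handles many more corner cases; enwiki is used here only as an example):

    import pywikibot

    repo = pywikibot.Site('wikidata', 'wikidata').data_repository()
    SITELINK_TO_REDIRECT = 'Q70893996'
    INTENTIONAL = 'Q70894304'

    def tag_redirect_sitelink(item: pywikibot.ItemPage, dbname: str = 'enwiki') -> None:
        item.get()
        if dbname not in item.sitelinks:
            return
        sitelink = item.sitelinks[dbname]
        badges = {badge.id for badge in sitelink.badges}
        if SITELINK_TO_REDIRECT in badges or INTENTIONAL in badges:
            return                                    # already tagged, nothing to do
        page = pywikibot.Page(pywikibot.Site('en', 'wikipedia'), sitelink.title)
        if not page.isRedirectPage():
            return                                    # only tag sitelinks that point to redirects
        item.setSitelink(
            {'site': dbname, 'title': sitelink.title,
             'badges': [pywikibot.ItemPage(repo, SITELINK_TO_REDIRECT)]},
            summary='add sitelink to redirect badge (Q70893996)',
        )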

sounds good. link to the source? BrokenSegue (talk) 05:28, 15 November 2022 (UTC)[reply]
I haven't started writing code yet. I found that there is already another task Wikidata:Requests for permissions/Bot/MsynBot 10 running. What if I treat this task as a backup task? Or is this not actually necessary? Kanashimi (talk) 03:34, 21 November 2022 (UTC)[reply]
The complete source code of my bot is here: https://github.com/MisterSynergy/redirect_sitelink_badges. It is a bit of a work in progress, since I need to address all sorts of special situations that my bot comes across during the initial backlog processing.
You can of course come up with something similar, but after the initial backlog has been cleared, there is actually not that much work left to do. Given how complex this task turned out to be, I am not sure whether it is worth making a completely separate implementation for it. Yet, it's your choice.
Anyways, my bot would not be affected by the presence of another one in a similar field of work. —MisterSynergy (talk) 18:55, 21 November 2022 (UTC)[reply]

  Support Just another implementation of an approved task, why not trust this one? Midleading (talk) 15:42, 4 November 2024 (UTC)[reply]

@Kanashimi: What is the situation here? Are you still interested in an approval? --Wüstenspringmaus talk 08:45, 15 February 2025 (UTC)[reply]
I may have to wait until I have time to restart this quest. Kanashimi (talk) 12:52, 15 February 2025 (UTC)[reply]


YSObot (talkcontribsnew itemsnew lexemesSULBlock logUser rights logUser rightsxtools)
Operator: YSObot (talkcontribslogs)

Task/s: Account for mapping Wikidata to the General Finnish Ontology (Q27303896) and the YSO places ontology by adding YSO ID (P2347), and for creating new corresponding concepts in case there are no matches.

Code: n/a. Uploads will be done mainly with OpenRefine, Mix'n'match and corresponding tools.

Function details: YSO includes over 40,000 concepts and about half of them are already mapped. The mapping includes

Matches are checked manually before upload. Double-checking is controlled afterwards by using the constraint violations report.

Flag/s: High-volume editing, Edit existing pages, Create, edit, and move pages

--YSObot (talk) 11:33, 16 December 2021 (UTC)[reply]

  • The bot was running without approval (this page was never included). I asked the operator to first get it approved. Can you please explain the creation of museum building (Q113965327) & theatre building (Q113965328) and similar duplicate items? Multichill (talk) 16:27, 15 September 2022 (UTC)[reply]
    museo (Q113965327) & teatteri (Q113965328) are part of the Finnish National Land Survey classification for places. These classes will be mapped to existing items, if they are exact matches, by using Property:P2959.
    Considering duplicate YSO ID instances: these are most often due to modelling differences between Wikidata and YSO. Some concepts are split in one vocabulary but not in the other, and vice versa. These are due to linguistic and cultural differences in vocabularies and concept formation. Currently the duplicates would be added to the exceptions list of the YSO ID property P2347. However, lifting the single-value constraint for this property is another option here.
    Anyway, YSObot is currently an important tool in efforts to complete the mappings of the 30,000+ concepts of YSO with Wikidata. Uploads of YSO IDs are made to reconciled items from OpenRefine. See the YSO-Wikidata mapping project and the log of YSObot. For the moment, uploads are usually done to only 10-500 items at a time, a few times per day at most. Saarik (talk) 13:46, 23 September 2022 (UTC)[reply]
    That's not really how Wikidata works. All your new creations look like duplicates of existing items, so they shouldn't have been created. Your proposed usage of {{P|P2959}} is incorrect. With the current explanation I   Oppose this bot. You should first clean up all these duplicates before doing any more edits with this bot. @Susannaanas: care to comment on this? Multichill (talk) 09:58, 24 September 2022 (UTC)[reply]
    This bot is very important; we just need to reach a common understanding about how to model the specific Finnish National Land Survey concepts. I have myself struggled with them previously. There is no need to oppose the bot itself. – Susanna Ånäs (Susannaanas) (talk) 18:02, 25 September 2022 (UTC)[reply]
    why do we want to maintain permanently duplicated items? this seems like a bad outcome. why not instead make these subclasses of the things they are duplicates of. or attach the identifier to already existing items. BrokenSegue (talk) 20:36, 11 October 2022 (UTC)[reply]
    I think this discussion went a little astray from the original purpose of YSObot.
    The Finnish National Land Survey place types were erroneously created with the YSObot account, although they are not related to YSO at all. I was adding them manually with OpenRefine but forgot to change the user IDs in my OpenRefine! I thought that would not be a big issue. The comments by @Multichill and @BrokenSegue are not really related to the original use of YSObot and do not belong here at all, but rather on the Q106589826 talk page.
    About the duplicate question: earlier, I did exactly that and added these to already existing items with the "instance of" property. Then I received feedback and was told to create separate items for the types. So now I am getting two totally opposite instructions from you. Let's move this discussion to its proper place.
    And please add the correct rights for this bot account if they are still missing, as we still need to add the remaining 10,000+ identifiers. Saarik (talk) 11:32, 27 October 2022 (UTC)[reply]
  •   Oppose as per above. If you refrain from creating new items I would probably support it if I could easily see the flow of logic.
  • I strongly encourage you to publish an activity PlantUML diagram showing the logic of the matching.
  • Thanks in advance. So9q (talk) 10:26, 2 January 2024 (UTC)[reply]

PodcastBot (talkcontribsnew itemsnew lexemesSULBlock logUser rights logUser rightsxtools)
Operator: Germartin1 (talkcontribslogs)

Task/s: Upload new podcast episodes, extracting: title, part of the series, has quality (explicit episode), full work available at (mp3), production code, Apple Podcasts episode ID, Spotify episode ID. Regex extraction: talk show guest, recording date (from the description). It will be run manually and only for pre-selected podcasts. Code: https://github.com/mshd/wikidata-to-podcast-xml/blob/main/src/import/wikidataCreate.ts

Function details:

  • Read XML Feed
  • Read Apple podcast feed/ and spotify
  • Get latest episode date available on Wikidata
  • Loop over all new episodes which do not exist in Wikidata yet
  • Extract data
  • Import to Wikidata using maxlath/wikidata-edit

--Germartin1 (talk) 04:38, 25 February 2022 (UTC)[reply]
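The linked bot is TypeScript (using wikidata-edit), but the "get latest episode date, then loop new entries" steps listed above can be illustrated with a short Python sketch; the series item, feed URL and user agent are placeholders.

    from datetime import datetime, timezone

    import feedparser   # RSS/Atom parsing
    import requests

    WDQS = 'https://query.wikidata.org/sparql'
    SERIES = 'Q00000000'   # placeholder podcast series item

    # Latest publication date (P577) among episodes that are part of the series (P179).
    query = f"""
    SELECT (MAX(?date) AS ?latest) WHERE {{
      ?episode wdt:P179 wd:{SERIES} ; wdt:P577 ?date .
    }}
    """
    resp = requests.get(WDQS, params={'query': query, 'format': 'json'},
                        headers={'User-Agent': 'PodcastBot-sketch/0.1'})
    bindings = resp.json()['results']['bindings']
    if bindings and 'latest' in bindings[0]:
        latest = datetime.fromisoformat(bindings[0]['latest']['value'].replace('Z', '+00:00'))
    else:
        latest = datetime.min.replace(tzinfo=timezone.utc)

    # Loop over feed entries that are not on Wikidata yet.
    feed = feedparser.parse('https://example.com/podcast.rss')   # placeholder feed URL
    for entry in feed.entries:
        published = datetime(*entry.published_parsed[:6], tzinfo=timezone.utc)
        if published > latest:
            print('would import:', entry.title)   # extraction + wikidata-edit write happens here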

How about episodes of podcasts with a Wikipedia article? @Ainali:--Trade (talk) 18:34, 12 June 2022 (UTC)[reply]