Bot requests
If you have a bot request, add a new section using the button and state exactly what you want. To speed up processing, first discuss the legitimacy of your request with the community in the Project chat or on the relevant WikiProject's talk page. Please refer to previous discussions justifying the task in your request.

For botflag requests, see Wikidata:Requests for permissions.

Tools available to all users that can accomplish the work without the need for a bot:

  1. PetScan for creating items from Wikimedia pages and/or adding the same statements to items
  2. QuickStatements for creating items and/or adding different statements to items
  3. Harvest Templates for importing statements from Wikimedia projects
  4. Descriptioner for adding descriptions to many items
  5. OpenRefine to import any type of data from tabular sources
On this page, old discussions are archived. An overview of all archives can be found at this page's archive index. The current archive is located at 2019/07.
SpBot archives all sections tagged with {{Section resolved|1=~~~~}} after 2 days.

You may find these related resources helpful:

  • Dataset Imports
  • Why import data into Wikidata
  • Learn how to import data
  • Bot requests
  • Ask a data import question


Redirects after archival

Request date: 11 September 2017, by: Jsamwrites (talkcontribslogs)

Link to discussions justifying the request
Task description

Retain links to the original discussion sections on the discussion pages even after archival, by allowing redirection.

Licence of data to import (if relevant)
Discussion


Request process

Semi-automated import of information from Commons categories containing a "Category definition: Object" template

Request date: 5 February 2018, by: Rama (talkcontribslogs)

Link to discussions justifying the request
Task description

Commons categories about one specific object (such as a work of art, archaeological item, etc.) can be described with a "Category definition: Object" template [1]. This information is essentially a duplicate of what is or should be on Wikidata.

To prove this point, I have drafted a "User:Rama/Catdef" template that uses Lua to import all relevant information from Wikidata and reproduces all the features of "Category definition: Object", while requiring only the Q-Number as parameter (see Category:The_Seated_Scribe for instance). This template has the advantage of requesting Wikidata labels to render the information, and is thus much more multi-lingual than the hand-labeled version (try fr, de, ja, etc.).

I am now proposing to deploy another script to do the same thing the other way round: import data from the Commons templates into the relevant fields of Wikidata. Given the variety of ways a human can label or mislabel information in a template such as "Category definition: Object", I think that the script should be a helper tool for importing data: it is to be run on one category at a time, with a human checking the result, and correcting and completing the Wikidata entry as required. For now, I have been testing and refining my script on subcategories of [2] Category:Ship models in the Musée national de la Marine. You can see the result in the first 25 categories or so, and the corresponding Wikidata entries.

The tool is presently in the form of a Python script with a simple command-line interface:

./read_commons_template.py Category:Scale_model_of_Corse-MnM_29_MG_78 reads the information from Commons, parses it, renders the various fields in the console for debugging purposes, and creates the required Wikibase objects (e.g: text field for inventory numbers, Q-Items for artists and collections, WbQuantity for dimensions, WbTime for dates, etc.)
./read_commons_template.py Category:Scale_model_of_Corse-MnM_29_MG_78 --commit does all of the above, creates a new Q-Item on Wikidata, and commits all the information in relevant fields.
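To illustrate the parsing step, here is a minimal sketch (not the author's actual script), assuming pywikibot and mwparserfromhell; mapping the raw fields to Wikibase objects, as described above, is left out:

import pywikibot
import mwparserfromhell

def read_object_template(category_title):
    """Return the raw parameters of a 'Category definition: Object'
    template on a Commons category page, as a name -> value dict."""
    commons = pywikibot.Site('commons', 'commons')
    page = pywikibot.Page(commons, category_title)
    for template in mwparserfromhell.parse(page.text).filter_templates():
        if template.name.matches('Category definition: Object'):
            return {str(p.name).strip(): str(p.value).strip()
                    for p in template.params}
    return None

print(read_object_template('Category:Scale_model_of_Corse-MnM_29_MG_78'))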

Ideally, once all the desired features have been implemented and tested, this script might be useful as a tool where one could enter the

Licence of data to import (if relevant)

The information is already on Wikimedia Commons and is common public knowledge.

Discussion


Request process

Crossref Journals

Request date: 27 March 2018, by: Mahdimoqri (talkcontribslogs)

Link to discussions justifying the request
Task description
  • Add missing journals from Crossref
Licence of data to import (if relevant)
Discussion


Request process

elevation above sea level (P2044) values imported from ceb-Wiki

Request date: 6 September 2018, by: Ahoerstemeier (talkcontribslogs)

Link to discussions justifying the request
  • Many items have their elevation imported from the Cebuano Wikipedia. However, the way the bot created the values is very faulty; especially due to inaccurate coordinates, the value can differ by up to 500 m! Thus most of the values are utter nonsense; some are a rough approximation, but certainly not good data. To make things worse, the imported from Wikimedia project (P143) qualifier often wasn't added. For an extreme example see Knittelkar Spitze (Q1777201).
Task description

Firstly, a bot has to add all the missing imported from Wikimedia project (P143) references omitted in the original infobox harvesting. Secondly, especially for mountains and hills, the values have to be set to deprecated rank, to prevent them from poisoning our good data.
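A minimal pywikibot sketch of both steps, assuming the affected items come from a query and that Q837615 (Cebuano Wikipedia) is the right P143 target:

import pywikibot

repo = pywikibot.Site('wikidata', 'wikidata').data_repository()
CEBWIKI = pywikibot.ItemPage(repo, 'Q837615')  # Cebuano Wikipedia (assumed)

def fix_elevation(item_id, deprecate=False):
    item = pywikibot.ItemPage(repo, item_id)
    item.get()
    for claim in item.claims.get('P2044', []):
        # Step 1: add the missing "imported from" reference.
        if not claim.sources:
            ref = pywikibot.Claim(repo, 'P143', is_reference=True)
            ref.setTarget(CEBWIKI)
            claim.addSource(ref, summary='Add missing P143 reference')
        # Step 2: deprecate the unreliable value (mountains/hills).
        if deprecate and claim.rank != 'deprecated':
            claim.changeRank('deprecated')

fix_elevation('Q1777201', deprecate=True)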

Licence of data to import (if relevant)
Discussion


Request process

BLKÖ

Most pages in https://de.wikisource.org/wiki/Kategorie:BLK%C3%96 (27209 pages) seem to lack items (http://petscan.wmflabs.org/?psid=6382466 , currently 26641 pages).

I think it would be worth creating them, as well as an item for the person who is the subject of the article if it can't be matched with one of the existing items. --- Jura 07:43, 8 November 2018 (UTC)

Proposal

To get this started, I propose this structure for articles. It also mentions from which source each statement is imported. As I see it, besides the structure for articles, the structure for volumes and person subjects with imported data also needs to be decided. Additionally, described by source (P1343) should probably be added to new and existing person subjects. --Pyfisch (talk) 22:29, 11 December 2018 (UTC)

Article


I've made a preliminary data export. It contains all BLKÖ articles with GND, Bearbeitungsstand, etc. The articles are linked based on the stated GND, Wikipedia and Wikisource articles; if there was a conflict, multiple Q-numbers are given. I also searched for items linked to the articles and unfortunately found many that describe the person instead of the text (they will need to be split). The last four columns state the date/place of birth/death from the text. The dates vary in accuracy:
  • year-month-day, year-month, only year
  • ~ before a date marks it as imprecise
  • > before a date marks dates stated as "nach 1804" ("after 1804")
  • A before a date stands for "Anfang/erste Tage" (beginning/first days of)
  • E before a date stands for "Ende/letzte Tage" (end/last days of)
  • M before a date stands for "Mitte" (middle of)
  • ? means BLKÖ knows the person was dead but not when he/she died

The places will need to be manually matched to Q-items. The first column contains some metadata about the kind of page. There are:

  • empty: Person
  • L: List (Liste)
  • F: Family, coat of arms (Wappen), genealogy (Genealogie)
  • R: Cross Reference
  • P: Prelude
  • H: note about names and alternate spellings
  • N: corrections, addenda (Nachträge)

Each group should get a distinct instance of (P31) value. @Jura1: Do you like it? This is just for viewing; a later version will be editable to make manual changes before the import. --Pyfisch (talk) 22:14, 18 December 2018 (UTC)

  • I like the approach. BTW, there is Help:Dates that attempts to summarize how to add incomplete dates. --- Jura 14:05, 20 December 2018 (UTC)
    • editable data export. Updated the exported data. The sheet "articles" is already cleaned up. But I need help to match the ~4000 place names in the sheet "places" to Wikidata Q-Items. --Pyfisch (talk) 16:07, 22 December 2018 (UTC)
  • @Pyfisch: thanks a lot for your proposal! Are there any plans to realize this? --M2k~dewiki (talk) 07:16, 10 July 2019 (UTC)
@M2k~dewiki: Yes, the data is already prepared for the import, but I have not gotten around to writing an import script, getting approval and running the script. --Pyfisch (talk) 09:07, 11 July 2019 (UTC)
    • You could do the upload with QuickStatements --- Jura 12:24, 19 July 2019 (UTC)

Clinical Trials

Request date: 8 November 2018, by: Mahdimoqri (talkcontribslogs)

Link to discussions justifying the request

https://www.wikidata.org/w/index.php?title=Wikidata:Dataset_Imports/Clinical_Trials*

Task description
Licence of data to import (if relevant)
Discussion


Request process

Cleanup VIAF dates

Task description

There are a series of imports of dates that need to be fixed, please see Topic:Un0f1g1eylmopgqu and the discussions linked there, notably Wikidata:Project_chat/Archive/2018/10#Bad_birthdays with details on how VIAF formats them. --- Jura 05:28, 14 November 2018 (UTC)

Licence of data to import (if relevant)
Discussion
  • Is anyone interested in working on this problem? I think it's a real issue, but it needs attention from someone who can parse the VIAF records and that's certainly not me. - PKM (talk) 21:33, 16 March 2019 (UTC)
  • Yeah, it would be good. --- Jura 12:25, 19 July 2019 (UTC)

import writers

When adding values for screenwriter (P58), I notice that frequently these persons don't have Wikidata items yet.

It would be helpful to identify a few sources for these and create corresponding items. Ideally every TV episode would have its writers included. --- Jura 15:05, 18 November 2018 (UTC)

It would be beneficial if information such as whether the writer wrote just the teleplay or the story were also stated.--CENNOXX (talk) 07:19, 12 April 2019 (UTC)
  • At this stage, the idea is to simply create items for writers, not adding them to works. --- Jura 12:26, 19 July 2019 (UTC)

adding data from scoresway.com

Request date: 22 November 2018, by: Amirh123 (talkcontribslogs). Hi, please add player data from scoresway.com.

Link to discussions justifying the request
Task description
Licence of data to import (if relevant)
Discussion
@Amirh123: the license of this site doesn't seem to allow import. Cheers, VIGNERON (talk) 13:46, 12 February 2019 (UTC)

Import Schizosaccharomyces pombe protein coding genes

Request date: 6 December 2018, by: Anlock (talkcontribslogs)

Link to discussions justifying the request
Task description

The PomBase database manually curates and maintains the coding inventory of the S. pombe genome. I would like to upload the protein-coding genes of S. pombe as per this request: https://www.wikidata.org/wiki/Wikidata:Property_proposal/PomBase_systematic_ID

The dataset is located here: https://docs.google.com/spreadsheets/d/1nrFcoQJirshUYbgI8-O3sjIDUonDHM_gLClJrrm3zZY/

Licence of data to import (if relevant)

Creative Commons Attribution 4.0 International license (CC-BY)

Discussion


Request process

Add original title of scientific articles

There are some articles that have a title (P1476) value enclosed in square brackets. This means that the title was translated into English and the article's original title wasn't in English.

Sample: https://www.wikidata.org/w/index.php?title=Q27687073&oldid=555470366

Generally, the following should be done:

  1. deprecate existing P1476 statement
  2. add the original title with title (P1476)
  3. add the label in the original language
  4. remove [] from the English label

--- Jura 11:03, 11 December 2018 (UTC)

Research_Bot claims to do this under Maintenance Queries but I still see a lot of research papers with this issue. I might work on a script for this to try and figure out how to make a bot. Notme1560 (talk) 18:17, 21 March 2019 (UTC)
I have created a script for this task. source and permission request --Notme1560 (talk) 20:39, 23 March 2019 (UTC)
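For reference, a rough sketch of the per-item edit (mine, not Notme1560's script), assuming pywikibot; looking up the original title and its language from the source database is left to the caller:

import pywikibot

repo = pywikibot.Site('wikidata', 'wikidata').data_repository()

def fix_translated_title(item_id, original_title, original_lang):
    item = pywikibot.ItemPage(repo, item_id)
    item.get()
    # 1. deprecate the bracketed English title
    for claim in item.claims.get('P1476', []):
        target = claim.getTarget()
        if target.language == 'en' and target.text.startswith('['):
            claim.changeRank('deprecated')
    # 2. add the original title
    claim = pywikibot.Claim(repo, 'P1476')
    claim.setTarget(pywikibot.WbMonolingualText(original_title, original_lang))
    item.addClaim(claim)
    # 3. + 4. set the original-language label, unbracket the English one
    labels = {original_lang: original_title}
    en = item.labels.get('en', '')
    if en.startswith('[') and en.endswith(']'):
        labels['en'] = en[1:-1]
    item.editLabels(labels, summary='Fix title translated to English')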

Reviews in articles

When doing checks on titles, I found that some items with P31 = scholarly article (Q13442814) include an ISBN in the title (P1476) value.

Sample: Q28768784.

Ideally, these would have a statement main subject (P921) pointing to the item about the work. --- Jura 19:10, 13 December 2018 (UTC)

Discussion

@Jura1: I’ve been manually cleaning up a few of these. Some comments on the process from my perspective:

- PKM (talk) 01:33, 4 March 2019 (UTC)

    • Sure, it's possible to take this a step further. --- Jura 11:09, 10 March 2019 (UTC)

Patronage/clientèle patronage (P3872), rank-preferred for latest year available

Request date: 1 January 2019, by: Bouzinac (talkcontribslogs)

Link to discussions justifying the request
Task description

Update any element with P3872: if there are one or more years, set the latest year to preferred rank (and only when the value has year precision, not month, etc.), and set the other years, if present, to normal rank. For instance, see

And this should be executed once per year (as there might be new data). Thanks a lot!

Licence of data to import (if relevant)
Discussion
Request process

Auto-adding complementary values

Request date: 11 January 2019, by: Jc86035 (talkcontribslogs)

Link to discussions justifying the request
Task description

There should be a bot to add complementary values for Genius artist ID (P2373), Genius album ID (P6217) and Genius song ID (P6218). For all Genius artist ID (P2373) values without Genius artist numeric ID (P6351), the bot should add the first match of regex \{"name":"artist_id","values":["(\d+)" in the linked page, and vice versa with the first match of regex "slug":"([0-9A-Z][0-9a-z-]*[0-9a-z]|[0-9A-Z])". Preferred and deprecated ranks should be inferred when adding new values, although if multiple statements to be added have the same value but different rank then only the statement with the higher rank should be used. The values should be periodically checked to see if they match, and errors should be reported somewhere (probably on-wiki). The same should also be implemented for the other two pairs of properties, Genius album ID (P6217)/Genius artist numeric ID (P6351) and Genius song ID (P6218)/Genius song numeric ID (P6361).
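A sketch of one direction (slug to numeric artist ID), assuming the P2373 formatter URL https://genius.com/artists/$1; the regex is the one from the request with the bracket escaped, and the User-Agent string is illustrative:

import re
import requests

ARTIST_ID_RE = re.compile(r'\{"name":"artist_id","values":\["(\d+)"')

def numeric_artist_id(slug):
    """Fetch the Genius artist page for a P2373 value and extract
    the numeric ID for P6351."""
    resp = requests.get('https://genius.com/artists/' + slug,
                        headers={'User-Agent': 'complementary-id-sketch'})
    resp.raise_for_status()
    match = ARTIST_ID_RE.search(resp.text)
    return match.group(1) if match else None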

Licence of data to import (if relevant)

N/A (presumed not copyrightable)

Discussion

All of the properties now exist. Jc86035 (talk) 10:57, 15 January 2019 (UTC)

Request process

Adding main subject (P921) to scholarly articles based on relevant keywords in the title and description

Request date: 17 January 2019, by: Thibdx (talkcontribslogs)

Task description

The goal of this bot is to add main subject (P921) to scholarly articles.

The metadata of scholarly articles in Wikipedias is quite hard to maintain by hand, because the rate of creation of these articles exceeds the community's capacity to generate data, so automation would be a great help.

In many cases, finding a specific keyword in a scholarly article's title makes it obvious that it is a main subject of the article. This is the case for most technical terms that do not have a double meaning.

For example:

A list of such pairs could be stored in a protected wikipage. For each keyword, the bot would search scholarly articles and add the related main subject (P921) statement if the keyword is in the title. Of course, each pair would have to be tested first to ensure data consistency.

Human-readable algorithm
Wikidata:WikiProject Materials/ScholarTopicsBot
Getting the work done

If an experienced dev thinks it could be one of their priorities, I would be glad to hand this over to them. If not, I can try to do it myself. I'm not a dev at all; the only thing I have done so far is modify some scripts. So it would help if you could point me to examples of the following (see the sketch after this list):

  • A bot that extracts content from a wikipage
  • A bot that lists QIDs using a query
  • A bot that adds statements to items

Regards
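A sketch of the three pieces, assuming pywikibot; the 'keyword|QID' page format and the label-based SPARQL query are assumptions, not a finished bot:

import pywikibot
from pywikibot.data import sparql

site = pywikibot.Site('wikidata', 'wikidata')
repo = site.data_repository()

def keyword_pairs(page_title):
    # 1. extract content from a wikipage (one 'keyword|QID' per line)
    for line in pywikibot.Page(site, page_title).text.splitlines():
        if '|' in line:
            keyword, qid = line.split('|', 1)
            yield keyword.strip(), qid.strip()

def articles_with_keyword(keyword, limit=100):
    # 2. list QIDs using a query (slow; a search index would do better)
    query = '''SELECT ?item WHERE {
                 ?item wdt:P31 wd:Q13442814 ; rdfs:label ?l .
                 FILTER(lang(?l) = "en" && CONTAINS(LCASE(?l), LCASE("%s")))
               } LIMIT %d''' % (keyword, limit)
    return sparql.SparqlQuery().get_items(query, item_name='item')

def add_main_subject(qid, subject_qid):
    # 3. add a statement to an item, skipping existing values
    item = pywikibot.ItemPage(repo, qid)
    item.get()
    existing = {c.getTarget().getID() for c in item.claims.get('P921', [])}
    if subject_qid not in existing:
        claim = pywikibot.Claim(repo, 'P921')
        claim.setTarget(pywikibot.ItemPage(repo, subject_qid))
        item.addClaim(claim, summary='main subject from title keyword')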

Discussion
Request process

Import Treccani IDs

Request date: 6 February 2019, by: Epìdosis (talkcontribslogs)

Task description

At the moment we have four identifiers referring to http://www.treccani.it/: Dizionario biografico degli italiani Identifier (P1986), Treccani ID (P3365), Enciclopedia Italiana ID (P4223) and Dizionario di Storia Treccani ID (P6404). Each article of these works has, in the right column "ALTRI RISULTATI PER", links to the articles on the same topic in the other works (e.g. Ugolino della Gherardesca (Q706003) has Treccani ID (P3365) conte-ugolino, and http://www.treccani.it/enciclopedia/conte-ugolino/ also links to the Enciclopedia Italiana (Enciclopedia Italiana ID (P4223)) and the Dizionario di Storia (Dizionario di Storia Treccani ID (P6404))). These cases are extremely frequent: many items have Dizionario biografico degli italiani Identifier (P1986) but not Treccani ID (P3365)/Enciclopedia Italiana ID (P4223); others have Treccani ID (P3365) but not Enciclopedia Italiana ID (P4223); nearly no item has the recently created Dizionario di Storia Treccani ID (P6404).

My request is: check each value of these identifiers in order to obtain values for the other three identifiers through the column "ALTRI RISULTATI PER".
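A scraping sketch; the link pattern used for the "ALTRI RISULTATI PER" column is an assumption and would need checking against the real page markup:

import re
import requests

def altri_risultati(treccani_id):
    """Return (slug, work) pairs linked from a Treccani article,
    e.g. ('ugolino-della-gherardesca', 'Dizionario-Biografico')."""
    url = 'http://www.treccani.it/enciclopedia/%s/' % treccani_id
    html = requests.get(url).text
    # Cross-links look like /enciclopedia/<slug>_(<work>)/ (assumed).
    return sorted(set(re.findall(r'/enciclopedia/([\w-]+)_\(([^)]+)\)/', html)))

for slug, work in altri_risultati('conte-ugolino'):
    print(work, slug)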

Discussion

Import alumni based on Wikipedia categories

Request date: 10 February 2019, by: GerardM (talkcontribslogs)

Task description

Categories with "category contains" "Human" and "Educated at" "Whatever institution" are to be used to include education information in Wikidata. For many universities these categories have been initially imported manually.

Licence of data to import (if relevant)
Discussion


Request process
  • I want to revive my RobotGMwikt profile to run this. Thanks, GerardM (talk) 16:36, 10 February 2019 (UTC)
    So do. Matěj Suchánek (talk) 14:34, 15 February 2019 (UTC)

Golf video games

Request date: 13 February 2019, by: Trade (talkcontribslogs)

Link to discussions justifying the request
Task description

Can someone please add the video game genre golf video game (Q60256879) to all video games in Category:Golf video games (Q8494058)? Trade (talk) 18:28, 13 February 2019 (UTC)

Licence of data to import (if relevant)
Discussion
  • Several entities in that category are series of video games, not individual games, and therefore should not have that statement added, I think? --Yair rand (talk) 05:13, 25 March 2019 (UTC)
Request process

Fuzhou Architecture Heritage

Request date: 26 February 2019, by: Davidzdh (talkcontribslogs)

Link to discussions justifying the request
Task description
Licence of data to import (if relevant)
Discussion

Can anyone help me to import these data? Thank you.- I am Davidzdh. 17:31, 26 February 2019 (UTC)

Hi, you're welcome. I see you have already created a Google Spreadsheet. Can you move the data around so the format matches this spreadsheet? Then you can export the spreadsheet as CSV, upload it to QuickStatements, and import the data yourself. Please tell me if you encounter any problems. --Pyfisch (talk) 10:19, 2 March 2019 (UTC)
Request process

wiktionary

Hi, please add Wiktionary links to Wikidata items; for example, iran doesn't have any Wiktionary links. Amirh123 (talk) 19:06, 26 February 2019 (UTC)

Wiktionary handles interwiki differently. Matěj Suchánek (talk) 19:37, 26 February 2019 (UTC)

Update P373 in several elements

Request date: 14 March 2019, by: Syrio (talkcontribslogs)

Link to discussions justifying the request

No discussion on this specific task (consensus was reached on Commons for the moves, of course), but it's just maintenance.

Task description

Hello! Recently, the categories in commons:Category:Churches in the Roman Catholic Archdiocese of Trento have been moved to a new naming standard; each category is tied to a Wikidata element whose Property:P373 needs to be updated following the move (it should just match the category's new name; a few of them already do, but most don't). Is it possible to do this via bot? If so, do I need to set up anything else (e.g. a list of the Wikidata elements)? Thank you, --Syrio posso aiutare? 11:26, 14 March 2019 (UTC)

Uuuh, no one? Is there any problem? --Syrio posso aiutare? 23:48, 22 March 2019 (UTC)
Would Mike Peel's bot help? Matěj Suchánek (talk) 18:15, 23 March 2019 (UTC)
@Syrio: if they have category redirects in place, they should be sorted out automatically by pi bot at the start of the month. If not, I can look into coding something custom for this. Thanks. Mike Peel (talk) 06:44, 24 March 2019 (UTC)
Oh, ok; yes, redirects do exist. Thank you! --Syrio posso aiutare? 08:58, 24 March 2019 (UTC)
Licence of data to import (if relevant)
Discussion


Request process

Humans

The English-language description of female (Q6581072) is "human who is female (use with P21)", for male (Q6581097) it is "human who is male (use with P21)" (emphasis added). The equivalent also appears to be true in many other languages.

Therefore any source that supports the use of one of those items as a value for sex or gender (P21) logically also supports the use of human (Q5) as the value for instance of (P31).

A bot should be used to copy references from P21 to P31 in such cases where no existing reference for P31 is found. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 13:17, 14 March 2019 (UTC)
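A sketch of the proposed copy, assuming pywikibot; it handles only the first unreferenced P31 = Q5 claim and recreates the reference claims rather than reusing them:

import pywikibot

repo = pywikibot.Site('wikidata', 'wikidata').data_repository()
SEXES = {'Q6581072', 'Q6581097'}  # female, male

def copy_p21_references(item_id):
    item = pywikibot.ItemPage(repo, item_id)
    item.get()
    human = [c for c in item.claims.get('P31', [])
             if c.getTarget() and c.getTarget().getID() == 'Q5'
             and not c.sources]
    if not human:
        return  # restricted to items with an unreferenced P31=Q5
    for gender in item.claims.get('P21', []):
        target = gender.getTarget()
        if not (target and target.getID() in SEXES):
            continue
        for source in gender.sources:  # dict: property -> [Claim, ...]
            copies = []
            for prop, refs in source.items():
                for ref in refs:
                    new_ref = pywikibot.Claim(repo, prop, is_reference=True)
                    new_ref.setTarget(ref.getTarget())
                    copies.append(new_ref)
            human[0].addSources(copies, summary='Copy P21 reference to P31')
            return  # one copied reference is enough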

  Oppose, original research. Sjoerd de Bruin (talk) 21:40, 16 March 2019 (UTC)
There is no original research proposed. Feel free to make a case, rather than an unsupported assertion, if you believe otherwise. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 20:38, 17 March 2019 (UTC)
That would cause problems with fictional characters, where sex or gender (P21) is also used. Fictional human characters use fictional human (Q15632617), not human (Q5) as values for instance of (P31). And IIRC separate items for sexes/genders of humans and organisms, e.g. male (Q6581097) and male organism (Q44148), were only created because some languages have different terms for male/female persons and animals. Unfortunately, it gets a bit complicated with non-human persons in fiction. Gandalf (Q177499) for example isn't human, but using male organism (Q44148) doesn't seem a good solution since that would apparently result in him being described with terms referring to animals in some languages. Therefore, most humanoid fictional characters use male (Q6581097)/female (Q6581072) at the moment. --Kam Solusar (talk) 22:06, 16 March 2019 (UTC)
It would not, because the proposal is restricted to cases where P31=Q5. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 20:38, 17 March 2019 (UTC)
Ah, my bad. Seems I totally misread your proposal and my mind immediately jumped somewhere else. :-/ --Kam Solusar (talk) 23:16, 17 March 2019 (UTC)
  Support per nom. --Tagishsimon (talk) 22:20, 17 March 2019 (UTC)
  Oppose I see a solution looking for a problem. No problem will be solved, but it will be harder to query items afterwards. Edoderoo (talk) 18:39, 10 April 2019 (UTC)
There is a problem: we have uncited statements. The assertion that "it will be harder to query items" is made without any supporting argument. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 21:11, 10 April 2019 (UTC)
I misread the request as moving the value of P21 into P31, but it's only the source/ref. But a bot can never check what the source is actually saying, so this bot/script will introduce errors or work on assumptions like: if it's not mentioned, it must be a human. My assumption is: if there is a source for P21, people can assume that it is worth checking whether it can be used for P31 as well. The Wikidata item is sourced; why isn't that enough? Edoderoo (talk) 05:25, 11 April 2019 (UTC)

Motorsports Hall of Fame inductees

Request date: 24 March 2019, by: Richard Arthur Norton (1958- ) (talkcontribslogs)

Task description

wikipedia:Category:Motorsports_Hall_of_Fame_of_America_inductees is about to be deleted at Wikipedia as "non-defining". Could someone automate the task of adding "Award_received=Motorsports_Hall_of_Fame_of_America", as I did here at Donnie Allison (Q1241964)? I would hate to see the information lost. I will separately propose adding a link to their biographies at their website as an "Identifier". --RAN (talk) 15:10, 24 March 2019 (UTC)

Discussion

See https://paws-public.wmflabs.org/paws/user/Edoderoo/notebooks/Add%20prize%20to%20category.ipynb This one can be helpful in more cases. Edoderoo (talk) 20:25, 10 April 2019 (UTC)

Request process

Update of population statement ranks

Request date: 11 April 2019, by: Tkarcher (talkcontribslogs)

Link to discussions justifying the request

Help:Ranking#What_ranks_are_for

Task description

Via QuickStatements, I added population figures for 2017, published by the World Bank Database, to 139 preselected countries which did not have any figures for 2017 or later. SPARQL query showing the updated countries. Some of these countries previously had only one figure (with normal rank), and some had multiple figures with the most recent one ranked as preferred. Could someone please update the ranks (a sketch follows below) and

  • rank the 2017 population figures as preferred
  • rank all older values as normal

Thank you! --Tkarcher (talk) 08:26, 11 April 2019 (UTC)
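A sketch of the rank logic for one country item, assuming pywikibot and that point in time (P585) qualifiers identify the year:

import pywikibot

repo = pywikibot.Site('wikidata', 'wikidata').data_repository()

def prefer_latest_population(item_id):
    item = pywikibot.ItemPage(repo, item_id)
    item.get()
    claims = item.claims.get('P1082', [])  # population

    def year(claim):
        dates = claim.qualifiers.get('P585', [])
        return dates[0].getTarget().year if dates else -1

    dated = [c for c in claims if year(c) >= 0]
    if not dated:
        return
    latest = max(dated, key=year)
    for claim in claims:
        wanted = 'preferred' if claim is latest else 'normal'
        if claim.rank != wanted:
            claim.changeRank(wanted)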

Discussion


Request process


Revert removal of name popularity maps

Request date: 19 April 2019, by: Infovarius (talkcontribslogs)

Task description

Please revert all removals of name popularity maps like this by Jura1. --Infovarius (talk) 15:14, 19 April 2019 (UTC)

Link to discussions justifying the request
Discussion


Request process

Adding WALS codes to language elements

Request date: 29 April 2019, by: SyntaxTerror (talkcontribslogs)

Link to discussions justifying the request
Task description

Hello. I got CSV lists from Robert Forkel from WALS, mapping their codes to the corresponding ISO 639-3 codes. It would be a good thing to add them with Property:P1466 to the language elements missing it; a possible import loop is sketched below.

The CSV lists are there: fr:Utilisateur:SyntaxTerror/wals-iso639-3.csv & fr:Utilisateur:SyntaxTerror/iso639-3-wals.csv

Notes :

  • Some WALS codes correspond to two or three ISO 639-3 codes (e.g. Albanian language: WALS = alb, ISO 639-3 = aln, als).
  • Some ISO 639-3 codes correspond to several WALS codes (e.g. Adi language: WALS = boj, boq, gal, mil, ISO 639-3 = adi).
  • Some WALS codes have only two letters.
  • There is not a WALS code for every ISO 639-3 code, but the CSV files list all the WALS codes that correspond to an ISO 639-3 code.

Please notify me if you answer, as I am not often on Wikidata. Regards, Şÿℵדαχ₮ɘɼɾ๏ʁ 00:55, 29 April 2019 (UTC)
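One possible import loop, assuming pywikibot, an 'iso,wals' CSV layout, and that language items are found via their ISO 639-3 code (P220); the many-to-many cases listed above would need review before committing:

import csv
import pywikibot
from pywikibot.data import sparql

repo = pywikibot.Site('wikidata', 'wikidata').data_repository()

def add_wals_codes(csv_path):
    query_service = sparql.SparqlQuery()
    with open(csv_path, newline='') as f:
        for iso, wals in csv.reader(f):
            items = query_service.get_items(
                'SELECT ?item WHERE { ?item wdt:P220 "%s" }' % iso,
                item_name='item')
            for qid in items:
                item = pywikibot.ItemPage(repo, qid)
                item.get()
                if 'P1466' in item.claims:
                    continue  # only fill in missing codes
                claim = pywikibot.Claim(repo, 'P1466')
                claim.setTarget(wals)
                item.addClaim(claim, summary='WALS code from WALS CSV list')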

Licence of data to import (if relevant)
Discussion


Request process

Removal of duplicate author name string statements

Request date: 30 April 2019, by: 129.13.72.197 (talkcontribslogs)

Task description

It would be nice if a bot could remove duplicate author name string (P2093) statements from scholarly-article items in those cases where author (P50) is filled correctly with a) the same series ordinal (P1545) statement and b) a stated as (P1932) statement whose string is the same as that of the corresponding author name string (P2093) statement. A discussion of this is for example at Property_talk:P2093#Temporary_use?. 129.13.72.197 12:36, 30 April 2019 (UTC)
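A sketch of the matching rule, assuming pywikibot; only the first qualifier of each kind is compared:

import pywikibot

repo = pywikibot.Site('wikidata', 'wikidata').data_repository()

def first_qualifier(claim, prop):
    quals = claim.qualifiers.get(prop, [])
    return quals[0].getTarget() if quals else None

def drop_redundant_name_strings(item_id):
    """Remove P2093 claims whose (series ordinal, name) pair is already
    covered by a P50 claim with matching P1545 and P1932 qualifiers."""
    item = pywikibot.ItemPage(repo, item_id)
    item.get()
    covered = {(first_qualifier(c, 'P1545'), first_qualifier(c, 'P1932'))
               for c in item.claims.get('P50', [])}
    redundant = [c for c in item.claims.get('P2093', [])
                 if (first_qualifier(c, 'P1545'), c.getTarget()) in covered]
    if redundant:
        item.removeClaims(redundant,
                          summary='Remove author name strings duplicated by P50')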

Discussion


Request process

ResearcherID (P1053) was replaced by Publons author ID (P3829)

Request date: 4 May 2019, by: Kolja21 (talkcontribslogs)

Link to discussions justifying the request
  • "Web of Science ResearcherID is now on Publons." [3]
Task description

The ResearcherID (52,358 items outdated) should be replaced by the Publons author ID:

Discussion
Request process

Accepted by (Edoderoo (talk) 16:50, 10 June 2019 (UTC)) and under process
Can you confirm that this is what you need?

If so, then I can run this script for all remaining items.

@Edoderoo: Yes, both edits are correct. Thanks! --Kolja21 (talk) 22:54, 10 June 2019 (UTC)
It's running, I guess until somewhere next weekend. Edoderoo (talk) 14:18, 12 June 2019 (UTC)

Task completed (19:11, 14 June 2019 (UTC))

@Edoderoo: Thanks for your work! Two questions:
  • I already started a task to fix that. I believe it is caused by another issue on the server today, the lag for queries is pretty high today, and my script didn't expect to find entries already edited before. Edoderoo (talk) 20:26, 14 June 2019 (UTC)
Cheers --Kolja21 (talk) 20:18, 14 June 2019 (UTC)


Magyarország közigazgatási helynévkönyve, 2018. január 1. (Hungarian)

Request date: 12 May 2019, by: Szajci (talkcontribslogs)

Link to discussions justifying the request
  • Hi everyone! The publication Magyarország közigazgatási helynévkönyve, 2018. január 1. (Gazetteer of the administrative place names of Hungary, 1 January 2018) is available at this link ([5]). Is it possible for a bot to enter the data into Wikidata? Please, someone write me something encouraging.
Task description
Licence of data to import (if relevant)
Discussion


Request process

Bot or template to add in Findagrave entries for new people not in Wikidata and for cemeteries not in Wikidata

Request date: 22 May 2019, by: Richard Arthur Norton (1958- ) (talkcontribslogs)

Task description
  • I would like the ability to type in a Findagrave number and have a bot/template migrate the Findagrave data into a new Wikidata entry; it would also do a search on that Findagrave number to make sure we do not already have it in Wikidata. For instance, I had to migrate https://www.findagrave.com/memorial/101150576 by hand.
  • We need the same for cemeteries not already in Wikidata. We really should have ALL cemeteries from Findagrave in Wikidata, using a Mix and Match scenario, which I have never used before. I would take responsibility for making sure they are not duplicates. If the cemetery is already in Wikidata, it adds the Findagrave number; if not, it creates a new entry. --RAN (talk) 23:16, 22 May 2019 (UTC)
Discussion

I have created an import from a CSV file before, to add certain data to Wikidata. In my case it was tennis players missing two properties with qualifiers that were time-consuming to enter through the web interface, but the same thing could be done for graves and cemeteries. It looks like we can get some of the data straight from the website if we just have a list of cemeteries we want to create. Would that be an idea? Edoderoo (talk) 14:48, 8 June 2019 (UTC)

Request process

Optimize format of country items

Given that these items get larger and larger, it might be worth reviewing their structure periodically and optimizing their format, e.g. by moving references to separate items, checking for duplication, etc. --- Jura 13:33, 14 June 2019 (UTC)

Related: https://www.wikidata.org/wiki/Wikidata:Project_chat/Archive/2019/06#Metadata_and_reference_unification_for_Economics_and_possibly_other_projects --813gan (talk) 17:37, 17 June 2019 (UTC)

  • If <stated in>: <bla> is sufficient to locate the information within <bla>, I don't think all elements from the item <bla> should be repeated in the reference. --- Jura 14:47, 25 June 2019 (UTC)

Uploading Data to Retrosheet IDs: Q64615865

Request date: 14 June 2019, by: Kbschroeder84 (talkcontribslogs)

Link to discussions justifying the request

https://www.wikidata.org/wiki/Talk:Q64615865

Task description

These are Retrosheet IDs for baseball players; Retrosheet is the largest repository of baseball data.

I received a copy/Google Sheet with the data, and I'm currently matching the QIDs to the baseball players in the sheet.
Next steps will be to create a Property and import the data to this property.
Licence of data to import (if relevant)
N/A
Discussion


Request process

Accepted by (Edoderoo (talk) 19:58, 16 June 2019 (UTC)) and under process

Removing invalid Billboard artist ID (P4208) statements

Request date: 17 June 2019, by: Tinker Bell (talkcontribslogs)

Link to discussions justifying the request
Task description

Remove all Billboard artist ID (P4208) statements matching the regex [0-9]{6}\/.{0,}
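A per-item sketch, assuming pywikibot; the character class is written as [0-9], which the discussion below confirms is what was meant:

import re
import pywikibot

repo = pywikibot.Site('wikidata', 'wikidata').data_repository()
BAD_ID = re.compile(r'[0-9]{6}/')  # six digits, then a slash

def remove_bad_billboard_ids(item_id):
    item = pywikibot.ItemPage(repo, item_id)
    item.get()
    bad = [c for c in item.claims.get('P4208', [])
           if c.getTarget() and BAD_ID.match(c.getTarget())]
    if bad:
        item.removeClaims(bad, summary='Remove invalid Billboard artist ID')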

Discussion

I'm checking right now how many of these claims are left. If that looks good, then deleting them is adding a line of code. See my script here. Edoderoo (talk) 12:55, 19 June 2019 (UTC)

Request process

Accepted by (Edoderoo (talk) 15:29, 19 June 2019 (UTC)) and under process
There was one item protected against vandalism that blocked my script. It has now finished completely.
Task completed (12:31, 23 June 2019 (UTC))

Edoderoo, thanks, but there are many cases matching the regex in Wikidata:Database_reports/Constraint_violations/P4208#"Format"_violations. The last update was on June 30. --Tinker Bell 02:39, 6 July 2019 (UTC)
The request was for six numbers (see the RegEx-example), but the ones left have seven digits. I see they don't work either, so my script runs again, now for 7 digits. Edoderoo (talk) 07:44, 6 July 2019 (UTC)
Let's now wait for the constraint-report to update. There must be progress. Edoderoo (talk) 11:56, 6 July 2019 (UTC)
Now there are only a few issues left, which are best handled manually. Edoderoo (talk) 07:29, 19 July 2019 (UTC)

References to unreferenced citations

Request date: 18 June 2019, by: JJBullet (talkcontribslogs)

Link to discussions justifying the request

Wikidata:Requests_for_permissions/Bot/BulletBot

Task description

Add references to unreferenced citations

Licence of data to import (if relevant)
Discussion


Request process

BulletBot

Request date: 20 June 2019, by: JJBullet (talkcontribslogs)

Link to discussions justifying the request
Task description

Add references to unreferenced citations, change categories when incorrect, add sitelinks to enwiki, and lastly create items if needed.

Licence of data to import (if relevant)
Discussion


Request process

Add description to items about articles

SELECT ?item
{
	?item wdt:P31 wd:Q13442814 . 
    OPTIONAL { ?item schema:description ?d . FILTER(lang(?d)="en") }
    FILTER( !BOUND(?d) )
}
LIMIT 10

Try it!

I seem to keep coming across articles that lack descriptions. If they had long titles, that wouldn't matter, but it happens with articles that could be mistaken for items about topics. As I can't query them efficiently and just add descriptions with QuickStatements/Descriptioner, maybe a bot could run the above query every few minutes or so (once the query server lag is gone) and add basic descriptions. If the standard description collides with another item, please add some variation. --- Jura 14:43, 24 June 2019 (UTC)
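A minimal sketch of such a bot, assuming pywikibot; the batch size, description text, and collision handling (just skipping here, rather than adding a variation) are placeholders:

import pywikibot
from pywikibot.data import sparql

repo = pywikibot.Site('wikidata', 'wikidata').data_repository()

QUERY = '''SELECT ?item {
  ?item wdt:P31 wd:Q13442814 .
  OPTIONAL { ?item schema:description ?d . FILTER(lang(?d) = "en") }
  FILTER(!BOUND(?d))
} LIMIT 500'''

def add_basic_descriptions():
    for qid in sparql.SparqlQuery().get_items(QUERY, item_name='item'):
        item = pywikibot.ItemPage(repo, qid)
        item.get()
        if 'en' in item.descriptions:
            continue  # description added since the query ran
        # A label/description collision with another item would need
        # a variation here; this sketch just uses the basic text.
        item.editDescriptions({'en': 'scholarly article'},
                              summary='Add basic description')

add_basic_descriptions()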

In English, most get a description during the import, but for people working on the other 300 language wikis this is no help ;-) For Dutch I have already given a big load of items a description. A tool that can also be of help is Descriptioner: if you copy your query in there, it can set the descriptions for you in the background. Edoderoo (talk) 09:13, 25 June 2019 (UTC)
I'm aware of that. I just did a few with SELECT ?item { ?item wdt:P31 wd:Q13442814 } OFFSET n LIMIT 50000
Surprisingly, I even got up to offset 3,000,000. Still, even with this approach, a bot might be the better choice.
The other query needs to use even smaller steps to avoid a timeout.
Maybe there is a better way to identify them.--- Jura 11:02, 25 June 2019 (UTC)
I almost got to offset 4000000 before facing a timeout in descriptioner as well. --- Jura 11:50, 25 June 2019 (UTC)
and now the initial query times-out too. --- Jura 13:46, 25 June 2019 (UTC)

HDI

set preferred rank

When looking at the data for the previous request, it occurred to me that maybe the same as above should be done for Human Development Index (P1081): the most recent value should have preferred rank, all others normal rank. However, I don't use it myself and it's a different type of data. @IvanP: who seems to have worked with it. --- Jura 15:35, 29 June 2019 (UTC)

@Jura1: I just wanted to note that HDI estimates for certain years have changed, e.g., the HDI of Germany in 1995 was given as 0.830 at the time I added the value to Wikidata, now it is 0.834. Bodhisattwa added current estimates but the outdated ones should be deleted. (I am not familiar with OpenRefine yet and actually did the HDI stuff manually back then. 😲) -- IvanP (talk) 16:33, 29 June 2019 (UTC)
SELECT (URI(CONCAT("https://www.wikidata.org/wiki/",strafter(str(?item), "y/"),"#P1081")) as ?click) 
        ?year ?v ?url ?rank 
        ?statedin
WHERE
{
    BIND(wd:Q1025 as ?item) 
    ?item p:P1081 ?st . 
    ?st ps:P1081 ?v .
    OPTIONAL { ?st prov:wasDerivedFrom/pr:P854 ?url }
    OPTIONAL { ?st pq:P585 ?year } .
    OPTIONAL { ?st prov:wasDerivedFrom/pr:P248 ?statedin } .
    ?st wikibase:rank ?rank 
}
ORDER BY ?year

Try it!

An additional problem, then: for some years we have multiple values, and from the statements it's hard to say which one is which (see the query above). The question is whether they should be deleted, get deprecated rank, or get some "criterion used" qualifier value (e.g. provisional).
Good thing 2017 has just one value ;). So we can set that preferred while sorting out the rest. --- Jura 23:28, 29 June 2019 (UTC)

Hi! The old value should be deprecated with "reason for deprecation" = item/value with less accuracy (Q42727519) (check Help:Deprecation). I'm operating WDBot for the property nominal GDP (P2131). I use "retrieved" to note when the data was retrieved. If over time there is a revision of an old value, then it is easy to check which one is the most current, and the old one can be deprecated. Cheers! Datawiki30 (talk) 19:10, 1 July 2019 (UTC)

fix multiple values per year

still todo --- Jura 23:24, 19 July 2019 (UTC)

"This work/study/research was supported"Edit

Request date: 8 July 2019, by: Steak (talkcontribslogs). More than 200 journal article items have in the title a phrase like "This work was supported...". In all cases I checked, this was erroneously included as part of the title.

Task description

Can a bot recrawl the correct journal titles or at least remove this silly phrase? Steak (talk) 20:28, 8 July 2019 (UTC)
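A sketch of the phrase removal, assuming pywikibot; the phrase list is illustrative, and recrawling the real title from the publisher would be the better fix:

import pywikibot

repo = pywikibot.Site('wikidata', 'wikidata').data_repository()
PHRASES = ('This work was supported', 'This study was supported',
           'This research was supported')

def strip_funding_phrase(item_id):
    item = pywikibot.ItemPage(repo, item_id)
    item.get()
    for claim in item.claims.get('P1476', []):
        title = claim.getTarget()
        for phrase in PHRASES:
            if phrase in title.text:
                clean = title.text.split(phrase)[0].rstrip(' .;')
                claim.changeTarget(pywikibot.WbMonolingualText(
                    clean, title.language))
                break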

Licence of data to import (if relevant)
Discussion


Request process

Wiktionary: connect wikt:it:Categoria:Composti degli hanzi with wikt:fr:Catégorie:Termes en chinois par caractère

Request date: 17 July 2019, by: Barbaking (talkcontribslogs)

Link to discussions justifying the request
  • no discussion really, it just seems logical :)
Task description

Hi, I just found out that on Wiktionary, the subcats of wikt:it:Categoria:Composti degli hanzi correspond to the subcats of wikt:fr:Catégorie:Termes en chinois par caractère; see e.g. wikt:it:Categoria:Composti di 中 in cinese and wikt:fr:Catégorie:Caractère 中 en chinois. I would then like to ask whether it is possible to connect every (it) Categoria:Composti di X in cinese with the corresponding (fr) Catégorie:Caractère X en chinois. There are nearly 600 subcats in the Italian category; doing this manually would be a nightmare... thanks, --Barbaking (talk) 08:42, 17 July 2019 (UTC)
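A sketch for one character, assuming pywikibot; creating new items for pairs where neither category is connected yet is left out:

import pywikibot

site_it = pywikibot.Site('it', 'wiktionary')
site_fr = pywikibot.Site('fr', 'wiktionary')

def connect_pair(hanzi):
    it_page = pywikibot.Page(site_it, 'Categoria:Composti di %s in cinese' % hanzi)
    fr_page = pywikibot.Page(site_fr, 'Catégorie:Caractère %s en chinois' % hanzi)
    if not (it_page.exists() and fr_page.exists()):
        return
    try:
        item = pywikibot.ItemPage.fromPage(it_page)
    except pywikibot.exceptions.NoPageError:
        return  # item creation not handled in this sketch
    if 'frwiktionary' not in item.sitelinks:
        item.setSitelink(fr_page, summary='Connect fr Chinese character category')

connect_pair('中')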

Discussion


Request process


Monthly number of subscribers

At Wikidata:Property proposal/subscribers, there is some discussion about various formats for the number of subscribers. For accounts with many subscribers, I think it would be interesting to gather monthly data in Wikidata.

Using format (D1), this could be added to items such as Q65665844 and Q65676176. Initially one might want to focus on accounts with more than 100 or 50 million subscribers. Depending on how it goes, we could change the threshold.

I think ideally the monthly data would be gathered in the last week or last ten days of the month. --- Jura 14:22, 19 July 2019 (UTC)

Adding the NosDéputés.fr identifier (P7040)

Request date: 20 July 2019, by: Tyseria (talkcontribslogs)

Link to discussions justifying the request
  • No discussion
Task description

Hi, is it possible for a bot to add the NosDéputés.fr identifier (P7040) to pages linked to member of the French National Assembly (Q3044918) and to 15th legislature of the Fifth French Republic (Q24939798)/14th legislature of the Fifth French Republic (Q3570385)/13th legislature of the Fifth French Republic (Q3025921)?
Examples with different names:

Sorry, it's my first request and I do not know how to do it :) Thanks!

Licence of data to import (if relevant)
Discussion


Request process

Add has part (P527) and part of (P361) to Romanian monuments

We seem to have plenty of items for monuments in Romania, where one is for the entire group (e.g. Q18545143) and several are for its parts (e.g. the ones in Q18545143#P527). I think it would be helpful if these were linked more systematically. --- Jura 12:05, 21 July 2019 (UTC)