User talk:GZWDer/2019

Translation notification: Wikidata:WikiProject Canada


You are receiving this notification because you are registered on Wikidata as a translator for Chinese. The page Wikidata:WikiProject Canada is available for translation. You can translate it here:




Wikidata translation coordinators, 01:58, 6 January 2019 (UTC)

Translation notification: Wikidata:WikiProject Canada


You are receiving this notification because you are registered on Wikidata as a translator for Chinese. The page Wikidata:WikiProject Canada is available for translation. You can translate it here:




Wikidata translation coordinators, 15:01, 6 January 2019 (UTC)

Project chat

Some of your work is now being discussed at project chat - grateful if you'd join in. --Tagishsimon (talk) 14:55, 11 January 2019 (UTC)

ID link

Hello. Please make the examples links when proposing an external ID property. Thank you. David (talk) 07:42, 20 January 2019 (UTC)

Aritmetica di Presburger Wikidata item

Hello, I have connected the Italian translation it:Aritmetica di Presburger with the Wikidata item d:Q956059, which already exists (the article has 11 language versions), and I reassigned the item d:Q60841572 you created to another page. My suggestion is to check, before creating a new item, that one doesn't already exist, because the article may be written in another language (a lot of articles are translations from English, for example). Thank you :-), --Camelia (talk) 11:07, 23 January 2019 (UTC).

Please review the actions of your bot

Your bot GZWDer (flood) regularly creates Wikidata objects for articles in the German Wikipedia without checking whether an object already exists via another Wikipedia. This is extremely annoying, as merging two existing objects requires a lot of work. Please stop your bot from doing this. Thanks.--Maphry (talk) 17:03, 24 January 2019 (UTC)

@Maphry: Creating such items makes duplicates easier to find and data easier to add (see here for a previous discussion). However, if the German Wikipedia community wants to do all item connection (and creation) manually, please let me know (with a community discussion about not using a bot for that).--GZWDer (talk) 17:10, 24 January 2019 (UTC)
  Oppose Creating duplicates without any fair reason only wastes server storage, and if one day Wikidata faces a 504 error, you will be the person who broke Wikidata, and hence you will face a set of legal actions. -- 08:47, 25 March 2019 (UTC)

Empty species items

Instead of creating an empty item, with only a label and a site link like Quasimitra floccata (Q60792733), please add all of the basic taxonomic related properties (taxon, name, rank, and parent). If not, please leave this for someone more capable to perform. —Tom.Reding (talk) 19:51, 24 January 2019 (UTC)


Hi! You have created an item for a category redirect. See more at pt:WP:CP#Moção de categorias e Wikidata (in Portuguese). --Luan (talk) 15:28, 26 January 2019 (UTC)

@Luan: I did not notice the practice that non-empty category redirects are categorized in their targets. You may check the other pages in w:pt:Categoria:!Redirecionamentos de categorias não vazios too. It would be better if there were a bot to recategorize the pages in them.--GZWDer (talk) 19:05, 26 January 2019 (UTC)
But there is a bot that recategorizes the pages (Alch Bot) after a cooldown period of 1 week. You did this just twice, with Q49677163 (Q61016763 and Q61025228). You can skip this category when you want to create new items; you can also check the other cases and correct them (merge the duplicates). --Luan (talk) 13:58, 28 January 2019 (UTC)


Chinese Wikipedia: 抖音_(消歧义) (Douyin (disambiguation)), Q61056110.--XL-028 (talk) 20:08, 28 January 2019 (UTC)

Woman's Building (Q41165043) / The Woman's Building (Chicago) (Q55635638)

Hi GZWDer, Would you be so kind as to merge The Woman's Building (Chicago) (Q55635638) with Woman's Building (Q41165043)? They are the same building. I can't figure out how to merge. Thanking you in advance, WomenArtistUpdates (talk) 17:52, 29 January 2019 (UTC)

@WomenArtistUpdates: See Help:Merge.--GZWDer (talk) 17:53, 29 January 2019 (UTC)
@GZWDer: Each one teach one. Thanks! WomenArtistUpdates (talk) 18:01, 29 January 2019 (UTC)

Two items are essentially the same thing

Hi. You created "Santa Cruz Museum of Art and History" (Q60739947) @ 14:16, 21 January 2019‎. It's essentially the same entity as "Museum of Art and History" (Q6940760).

I was trying to undo the edit that was automatically made here in my name @ 20:20, 5 November 2018‎ Wbm1058 . . (8,406 bytes) -118‎ . . (‎Page on [enwiki] deleted: Santa Cruz Museum of Art and History)

because I temporarily deleted that page on enwiki to do a history merge. Your edit is standing in the way, so I can't undo this edit that was made here in my name as a side effect of my enwiki editing.

And yes, I just noticed the section above this. Sorry, TL;DR. I'm too busy on English Wikipedia to take time for this. This is just an "FYI", do with it whatever you wish. Thanks, Wbm1058 (talk) 03:50, 31 January 2019 (UTC)

@Wbm1058: All fixed. Graham87 (talk) 05:47, 31 January 2019 (UTC)

Duplicate creation of TV episode The Conjugal Configuration

This bot created Q60745816 on January 21st, 2019, although Q56747765 had already existed since 2018.-- 09:59, 2 February 2019 (UTC)

I have merged the 2 pages for you. Redalert2fan (talk) 16:33, 2 February 2019 (UTC)

Duplicate addition

Hello, your bot created Q61059508 with the name "Jennie Kim" for a page on simplewiki, while Q26262599 with the same name, "Jennie Kim", had already existed since 2016. The older item was already linked to numerous other languages. I have merged the 2 pages. Thanks, Redalert2fan (talk) 16:25, 2 February 2019 (UTC)

Czech Republic

"market town in Zlín Region, Czech Republic"! Matěj Suchánek (talk) 09:34, 3 February 2019 (UTC)

Request from zh_yuewiki

Hello. Your bot has not visited zh_yuewiki for a few months. Would you mind arranging for your bot to help create items for zh_yuewiki on a regular basis (say, every 1 or 2 months)? Thank you.--Kowlooner (talk) 15:10, 10 February 2019 (UTC)


of this --Succu (talk) 22:42, 12 February 2019 (UTC)


Sorry to bother you. I have a batch of offline book spreadsheets, with data entered by myself, including ISBN, unified book number, author, publisher, publication and printing dates, the corresponding Douban ID, the language of the work, and so on. I noticed that Wikidata has a books project and hope this data can be of use to the wiki. You once reminded me about database matters; since you presumably have considerable expertise in database techniques, I would like to ask for your advice. - I am Davidzdh. 16:13, 18 February 2019 (UTC)

Self-referencing statements


I saw and reverted some strange self-referencing statements referenced from the China Biographical Database (Q13407958), e.g. Dong Kui (Q45628335). For next time, can you make sure not to create such self-references? (Except maybe in rare cases for time-travellers or gods, someone is never their own father or child.)

Cdlt, VIGNERON (talk) 17:33, 18 February 2019 (UTC)

Duplicated element

Hi! I've just found a duplicated item created by your bot. You can see the history here. It wasn't a big problem, but this information may be useful if you want to improve your bot's performance. Maria zaos (talk) 00:05, 3 March 2019 (UTC)

Item with no statements

Your bot created Q61126218 as an empty item, with only a label and a site link from hewiki. Please don't create items for such articles; they are low-quality content. Thanks Keren - WMIL (talk) 14:47, 3 March 2019 (UTC)

Ukrainian Wikipedia

Do not touch the Ukrainian Wikipedia.

What? What? What? What is this? Magyar Wikitanács Egyesület (talk) 14:36, 24 April 2019 (UTC)

Unfortunately I don't understand. Magyar Wikitanács Egyesület (talk) 14:36, 24 April 2019 (UTC)


This item (about me!) ought not to exist on its own like this. ORCID is a self-published source, so while it could be considered a reliable authority control identifier, it does not confer any notability. You shouldn't create mass-dumps of ORCID entries onto Wikidata unless they are linked from/to another item (e.g. a journal article) or they also fulfill other criteria on Wikidata:Notability. Deryck Chan (talk) 16:22, 7 May 2019 (UTC)

Bot created a duplicate disambig for label

Hi, your bot created the duplicate disambig Q61419800 with label Pleso, although the disambig Q2099336 Pleso already existed at that time. Please check for label existence first. MikePC (talk) 11:40, 10 May 2019‎ (UTC)

Legal case names and disambiguation

Your (flood) account has created a variety of items for legal cases in the United Kingdom. These often have the label simply of "R". This is because in judicial review cases, the cases are often titled "R (on the application of Smith) v Some Government Department". The code importing these pages probably looks for the first thing in brackets, concludes it is a Wikipedia article-title disambiguation label (like "John Smith (singer)"), splits the string at the first parenthesis, and uses everything before it as the item name.

This did not work for the following items, which I've now fixed:

I've also created a SPARQL query that should now return zero results to verify that no such instances are created again. I hope this is useful in improving item creation in the future. —Tom Morris (talk) 16:57, 6 June 2019 (UTC)
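The splitting heuristic described above can be sketched as follows; `naive_label` reproduces the reported behaviour, while `safe_label` is one hypothetical guard (the function names and the length threshold are illustrative, not the bot's actual code):

```python
def naive_label(title):
    """The heuristic described above: treat the first parenthesis as a
    Wikipedia disambiguator and keep only what comes before it."""
    return title.split(" (", 1)[0]

def safe_label(title):
    """Guarded variant: keep the full title when the part before the
    parenthesis is implausibly short, as in judicial review case names."""
    base = naive_label(title)
    if len(base) <= 2:  # catches "R (on the application of ...)" style titles
        return title
    return base
```

With this guard, "John Smith (singer)" still maps to "John Smith", while the "R (on the application of …)" cases keep their full names.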

Eucalyptus farinosa


Thanks for your work. I only edit Wikipedia plant pages and do not have your skills here, but you appear to have created a duplicate page in Wikidata for Eucalyptus farinosa. 00:15, 6 July 2019 (UTC) (Gderrin)

GZWDer (flood), stop right there!

Stop creating data items for pages that are under deletion discussion; this generates a lot of junk! "Society Person" Peppa Pig (talk)?

New empty items

Hi, can you please stop creating empty items which only include 1 sitelink on Czech Wikipedia such as Q65214932? We are used to creating them manually (and add relevant information during that process). --Vojtěch Dostál (talk) 10:55, 9 July 2019 (UTC)

@Vojtěch Dostál: Is anyone actively monitoring Special:UnconnectedPages? On the Dutch Wikipedia, there are also people who actively clean up items without claims.--GZWDer (talk) 11:00, 9 July 2019 (UTC)
Yes, a lot of people do that. Most items get created within one week. It is much more comfortable than going through a long list of 1500 items with cs sitelink but no claim :). Also, your bot created at least two duplicates (items on the given subject already existed) which I had to merge. --Vojtěch Dostál (talk) 11:03, 9 July 2019 (UTC)
@Vojtěch Dostál: Dutch Wikipedia people are pretty active in linking up the unconnected pages to existing and new items, but there is still some backlog. In this case a bot will create a new item if the article is at least 28 days old and hasn't been edited for over 21 days. Someone from the Czech Wikipedia should run a similar bot too (JAnDbot may already do this, but I'm not sure) and watch the backlog it creates. It's very suboptimal to leave a page unconnected for a long time.--GZWDer (talk) 11:11, 9 July 2019 (UTC)
Yes, JAnDbot does that after about a month which is an adequate time interval --Vojtěch Dostál (talk) 11:13, 9 July 2019 (UTC)
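The age rule quoted in this exchange reduces to a simple date check; a minimal sketch, assuming the 28-day/21-day thresholds stated above (the function name is illustrative):

```python
from datetime import datetime, timedelta, timezone

def should_create_item(created, last_edited, now=None):
    """The rule quoted above: only create an item for an unconnected article
    that is at least 28 days old and untouched for more than 21 days."""
    now = now or datetime.now(timezone.utc)
    return (now - created >= timedelta(days=28)
            and now - last_edited > timedelta(days=21))
```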

Hi! Please delete the item you have created; it needs to be merged with Q16828606. 10:57, 16 September 2019 (UTC)

Bot connecting to Wikidata

Hey, can your bot which connects pages to Wikidata items also connect episode redirects to their Wikidata items? --Gonnym (talk) 10:20, 21 July 2019 (UTC)


Hi, could you please stop your bot "GZWDer (flood)" from creating Wikidata items based on articles from the Bulgarian Wikipedia. It's not helping at all: we have to review every created item manually, merging it if needed, adding instances, etc. We do not have that many new articles on a daily basis, and we already use a tool to add new articles to Wikidata in a better and more complete way. --StanProg (talk) 10:36, 29 July 2019 (UTC)

@StanProg: Does your wiki have any user or bot dealing with the backlog of Special:UnconnectedPages?--GZWDer (talk) 15:11, 29 July 2019 (UTC)
We are using Duplicity for this purpose. I wrote to the community maybe a year ago about it, but I'm not sure if at the moment anyone except me is using it. We're also using the WE-Framework. If objects already exist in Wikidata, we can't see these articles in Duplicity and have to execute heavy SPARQL queries to get a list, like this one [1]. We do not have that many new articles on a daily basis, so even a single contributor can handle them manually. --StanProg (talk) 16:08, 29 July 2019 (UTC)

Stop creating empty items


You created Q65159779, and later I looked for an entry "Rapel Basin". As none was shown, I created Q66309135. When I wanted to enter the es.wikipedia link, Wikidata responded "it is already in Q65159779".

It is not the first time. Please don't tell me about Special:UnconnectedPages.

Please stop creating empty items, because they can't be found.

Thanks. --Juan Villalobos (talk) 18:27, 7 August 2019 (UTC)

  • @Juan Villalobos: Unless someone is actively monitoring Special:UnconnectedPages, creating items for these pages has benefits for both future improvement and duplicate detection.--GZWDer (talk) 02:59, 8 August 2019 (UTC)
In this case the hidden item caused more work, not less. --Juan Villalobos (talk) 11:08, 8 August 2019 (UTC)
The absence of items does not make things better, as many tools need an existing item to work.--GZWDer (talk) 11:59, 8 August 2019 (UTC)
OK. I see that it is important, and several times I have used the items created by bots. I should improve my search string, looking for " Elqui " first. Until now I looked only for the English name.
Thank you for your work for Wikidata. --Juan Villalobos (talk) 11:08, 9 August 2019 (UTC)


Nice to see your idea about a Large Dataset Bot. I look forward to seeing it in action. Sj (talk) 12:39, 13 August 2019 (UTC)


Hi, FYI: I granted a temporary bot flag for one week in order to let your bot run 500-1000 test edits. Lymantria (talk) 05:52, 16 August 2019 (UTC)

I don't have time right now. I will do it on Sunday or Monday.--GZWDer (talk) 06:22, 16 August 2019 (UTC)
All right, that is inside the span of one week. Lymantria (talk) 13:12, 17 August 2019 (UTC)
The request has been approved today. Lymantria (talk) 07:11, 30 August 2019 (UTC)

Buildings in Bilbao

Hi. In 2016 you created a lot of items for buildings in Bilbao, but without "instance of". Do you have the list, to identify every element? Thanks --Vanbasten 23 (talk) 06:06, 4 September 2019 (UTC)

Slowing down

Hey. Your bot creates 2-5 pages per second in Wikidata. It's disrupting our services, please slow down. Amir Sarabadani (WMDE) (talk) 10:42, 11 September 2019 (UTC)


You don't have to nominate them one by one. Once the import is finished, I'll take care of them. Ghuron (talk) 18:15, 23 September 2019 (UTC)

Thank you for hiding my password

Is there a way to hide it from old versions too? Uziel302 (talk) 17:38, 5 October 2019 (UTC)

@Uziel302: 1. You must change the passwords of all accounts (Wikimedia or not) that use this password, as it is already compromised. 2. See WD:OS.--GZWDer (talk) 17:40, 5 October 2019 (UTC)
I changed the passwords of the central login of all accounts. Thanks. Uziel302 (talk) 17:43, 5 October 2019 (UTC)

Names of Scholarly Articles with Brackets

Hello GZWDer,

You are adding scholarly articles with your LargeDatabaseBot, and the labels of some of them are in brackets: they start with a [ and end with a ]. Here is an example: Template:Q71159098. Why are some labels at PubMed in brackets? As a user, I would not search for a name with these brackets. Are you in contact with them, or how do you think it is possible to find such a scholarly article when entering the name without the brackets? I think the search used when adding a statement can't find items that don't match the label or an alias exactly, so it is not easy to find such an item that way. With the search in the upper right corner of Wikidata it is possible to find it, so maybe it is not a big problem, but please look into why they are in brackets and whether it should be changed. -- Hogü-456 (talk) 18:20, 16 October 2019 (UTC)
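One way to address the findability concern above would be to add the unbracketed form as an alias whenever a label is wrapped in square brackets; this is an illustrative sketch (function name and sample title made up), not part of the bot:

```python
def unbracketed_alias(label):
    """If a label is wrapped in square brackets (as some PubMed-derived titles
    are), return the unbracketed form so it can be added as an alias; return
    None when no alias is needed."""
    label = label.strip()
    if label.startswith("[") and label.endswith("]"):
        return label[1:-1]
    return None
```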


Could you create an item for Banhxeochelys? I don't know how to create one from scratch. Thanks. Sun Creator (talk) 23:05, 20 October 2019 (UTC)

done as Q71970151. Sun Creator (talk) 19:58, 22 October 2019 (UTC)

Suggest meta user page for User:GZWDer (flood)

Hi. With this bot editing at Wikidata, yet its user link showing on each wiki as a red link, can I ask you to consider creating a user page for your bot at Meta-Wiki? It will enable users to better understand the edits being made. Thanks.  — billinghurst sDrewth 21:32, 31 October 2019 (UTC)

If you need assistance getting it done at Meta, then please feel welcome to ping me.  — billinghurst sDrewth 21:33, 31 October 2019 (UTC)

New chemical compounds

Using your bot you created a lot of items with incorrect CAS numbers; check this. Also, it seems a lot of them may be duplicates of existing items. When are you going to fix this? Wostr (talk) 18:24, 7 November 2019 (UTC)

@Wostr: Before I created these items, I dumped all CAS numbers already in Wikidata. Actually, these constraint violations involve compounds with multiple CAS numbers. This may be fixed by splitting them into multiple values, but are any of these CAS numbers preferred? (If some are, they should be set to best rank.) I.e., are the numbers listed in ChemIDplus "primary" in any way?--GZWDer (talk) 18:38, 7 November 2019 (UTC)
The only option to be sure which number is correct is to check SciFinder. AFAIK for many compounds it's not a case of 'primary'/'secondary', but rather 'correct'/'withdrawn' (e.g. a compound was also listed under a synonym; one CAS number is now correct, the other has only historical value), or the CAS numbers are for the same compound in different forms (an inorganic compound may exist as different minerals, for example). EDIT: I think secondary CAS numbers should be added rarely and only in special and justified cases.
These numbers should be added as multiple values (if at all; I don't think it's necessary), but first, each CAS number should be checked before adding, and it seems this wasn't done. Just a few examples: atrazine (Q408652) (CAS 1912-24-9) and your new item atrazine (Q72444066) (CAS '1912-24-9 (93616-39-8)'); (±)-β-citronellol (Q27122080) (CAS 106-22-9) and (±)-β-citronellol (Q72461012) (CAS '106-22-9 (26489-01-0)'). The format 'CAS number (second CAS number)' is total nonsense and should not be allowed to be added, since there is a mandatory format constraint (Q21502408).
Right now the first CAS number from your 'CAS number (second CAS number)' format should be checked against WD, and then the duplicates (a lot of them, I think) should be merged into the older items. Wostr (talk) 19:06, 7 November 2019 (UTC)
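The malformed 'primary (secondary)' values discussed above can be split mechanically, and each part sanity-checked with the standard CAS Registry Number check digit (the last digit equals the sum of the preceding digits, weighted 1, 2, 3, … from right to left, modulo 10). A minimal sketch, not the bot's actual code:

```python
import re

CAS_PAIR = re.compile(r"^(\d{2,7}-\d{2}-\d)(?:\s*\((\d{2,7}-\d{2}-\d)\))?$")

def parse_cas_field(value):
    """Split the 'primary (secondary)' format shown above into a tuple;
    the second element is None when there is no parenthesised number."""
    m = CAS_PAIR.match(value.strip())
    if not m:
        raise ValueError("unrecognised CAS field: %r" % value)
    return m.group(1), m.group(2)

def cas_checksum_ok(cas):
    """CAS Registry Number check: the last digit equals the sum of the other
    digits weighted 1, 2, 3, ... from right to left, modulo 10."""
    digits = cas.replace("-", "")
    body, check = digits[:-1], int(digits[-1])
    return sum(int(d) * w for w, d in enumerate(reversed(body), 1)) % 10 == check
```

For example, '1912-24-9 (93616-39-8)' splits into two syntactically valid numbers; whether either is the preferred one still needs a registry lookup, as noted above.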

User:GZWDer (flood) creating duplicates

Hello. Your bot is certainly very useful, but is currently creating several items (possibly very many) that duplicate already existing items. See Sir Richard Williams-Bulkeley, 10th Baronet (Q75386327) which duplicates Sir Richard Williams-Bulkeley, 10th Baronet (Q7528680), and Loel Guinness (Q75256393) which duplicates Loel Guinness (Q7786818). It appears your bot is currently using The Peerage person ID (P4638). Would it be possible for your bot to check if a prospective item already has an identifier from a similar database, such as Kindred Britain ID (P3051) or person ID (P1819) or if not, is there a plan to automatically merge duplicates (or at least identify possible matches based on same birth & death dates)? With large databases like The Peerage (over 600,000 entries), a stitch in time can save nine-thousand. Thanks, -Animalparty (talk) 00:29, 19 November 2019 (UTC)

I have already checked all items with a child with Peerage ID. More checks may be done once statements are imported.--GZWDer (talk) 00:45, 19 November 2019 (UTC)
As long as the duplicates get merged (sooner rather than later), there's little issue. Cheers, -Animalparty (talk) 04:13, 19 November 2019 (UTC)
And a followup question: once the bot is done adding (all?) The Peerage entries (and all duplicates are merged), would it be possible for the bot to automatically add the important lineal connections (i.e. child (P40), father (P22), mother (P25))? -Animalparty (talk) 20:13, 20 November 2019 (UTC)
Another one: Paul de Rafélis de Saint-Sauveur (Q75360610) (and Paul de Rafélis de Saint-Sauveur (Q20894018)). Ayack (talk) 09:13, 25 November 2019 (UTC)
@Ayack: Now you merged the description "British peer or relation" to the existing item. See Wikidata:Project_chat#"British_peer_or_relation_"_description_cleanup --- Jura 11:06, 25 November 2019 (UTC)
@Jura1: I agree this description is nonsense. But 1. I don't edit descriptions in English, only in French; 2. I find this system of static descriptions absurd; 3. I think that everyone is responsible for their own errors; and 4. I didn't even notice it when I merged the items. Ayack (talk) 11:19, 25 November 2019 (UTC)
The problem with the "en" ones is that's what everyone gets first. No problem about (4), I didn't think you had to. As the user who added them does seem to fix the 150,000, I think it would be good to get more feedback on project chat. --- Jura 11:37, 25 November 2019 (UTC)
Another one: Bohemond V of Antioch (Q75571846) (and Bohemond V of Antioch (Q349102)). Ayack (talk) 11:56, 25 November 2019 (UTC)
Also almost all the children of Robert Guiscard (Q203792). Ayack (talk) 17:00, 25 November 2019 (UTC)

In addition to the duplicate items, your bot is also adding duplicate values; children or parents that are already listed: see children of Frank Jay Gould (Q3082670) and father of Dorothy Gould Burns (Q5298437). I have no idea how widespread this problem is: it may only affect a portion of items that were edited while GZWDer (flood) was flooding. Or it may affect thousands. What are you going to do to fix this? -Animalparty (talk) 20:54, 25 November 2019 (UTC)

  • I think krbot will merge this. Checking for children that don't have identifiers might be a good way to find dups... obviously, it's not always easy to determine whether they are actually duplicates. Some children might have a 5th given name identical to the 1st of their sibling. --- Jura 21:08, 25 November 2019 (UTC)
  • I just merged Q75353787 and Q75239670. I list them here as it may help identifying others that could be merged. Given the scale of the import, I haven't looked into it sufficiently to consider that the rate of duplicates is of concern. --- Jura 09:59, 27 November 2019 (UTC)
All duplicates are of concern. All double entries are of concern: they clutter infoboxes with redundancy. The scale and rate of the import is of concern. -Animalparty (talk) 17:48, 27 November 2019 (UTC)

Also not just duplicate items, but also duplicate relationships. See this. How can the bot be fixed to address that? Miraclepine (talk) 14:03, 27 November 2019 (UTC)

  • The de Valera family seems to have been partially duplicated as well, maybe due to some of the prefixes in labels. Good thing we stopped the batches before they all got marked "British peer or relation". --- Jura 06:10, 29 November 2019 (UTC)
Please also see Help_talk:Merge#Help_with_merging. --Kjeldjoh (talk) 09:56, 29 November 2019 (UTC)
and Wikidata:Database reports/identical birth and death dates gets some of them. --- Jura 01:13, 11 December 2019 (UTC)
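A report like the one just mentioned boils down to grouping items on a shared key; a toy sketch (data and function name made up for illustration):

```python
from collections import defaultdict

def duplicate_candidates(items):
    """Group (qid, label, birth, death) tuples by everything except the Qid;
    groups with more than one Qid are merge candidates, in the spirit of the
    'identical birth and death dates' report."""
    groups = defaultdict(list)
    for qid, label, birth, death in items:
        groups[(label, birth, death)].append(qid)
    return {key: qids for key, qids in groups.items() if len(qids) > 1}
```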

Prefixes in labels

Hon\\.|General|Major|Lieutenant|Admiral|Air Marshal|Lady|Sir|Lord|Captain|Col\\.|Colonel|Commander|Dr\\.|Reverend|Rev\\.|General Sir|(Maj|Lt)\\.\\-(Gen|Col)\\.|Rt\\.Hon\\.

Hon. GZWDer,

Some labels include prefixes generally not used in labels; above are a few I found. I think it would be preferable to use that form as an alias only. --- Jura 09:50, 27 November 2019 (UTC)

Sometimes the prefix "Sir" is required, see w:Wikipedia:Naming_conventions_(royalty_and_nobility)#British_nobility; some other labels also have a prefix (Lady Elizabeth Stanley (Q6470130)). A cleanup is OK but is not the highest priority.--GZWDer (talk) 14:03, 27 November 2019 (UTC)
  • I think that's mainly enwiki dab. There are also some 1000 with a "/" as second character: Q75877437. --- Jura 19:36, 27 November 2019 (UTC)
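The pattern posted at the top of this section could be applied like this (a sketch; the loop strips stacked prefixes such as "General Sir", and the original label is kept as an alias):

```python
import re

# Alternation reproduced from the list at the top of this section.
PREFIX_RE = re.compile(
    r"^(?:Hon\.|General|Major|Lieutenant|Admiral|Air Marshal|Lady|Sir|Lord|"
    r"Captain|Col\.|Colonel|Commander|Dr\.|Reverend|Rev\.|General Sir|"
    r"(?:Maj|Lt)\.-(?:Gen|Col)\.|Rt\.Hon\.)\s+"
)

def split_prefixes(label):
    """Strip stacked honorific prefixes from the label; return the bare label
    plus the original prefixed form to keep as an alias (or None)."""
    bare = label
    while True:
        match = PREFIX_RE.match(bare)
        if not match:
            break
        bare = bare[match.end():]
    return bare, (label if bare != label else None)
```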


Please be careful with the creation of impossible self-references, e.g. that a person is their own parent and/or their own child. Whatever editing process or bot code you are using, it should be possible to check for impossibilities like this before creating statements. Thanks! --Jamie7687 (talk) 16:27, 27 November 2019 (UTC)

  • @Jamie7687: This is because someone added his father's Peerage ID to the person's own item.--GZWDer (talk) 16:34, 27 November 2019 (UTC)
    • Isn't it still possible for you or your bot to check before adding a statement like this? Looking at my edit history, I have undone at least 12 of these self-referencing statements from your account, so I would greatly appreciate if you could check for these issues before creating statements. Thanks again, Jamie7687 (talk) 16:47, 27 November 2019 (UTC)
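The pre-flight check requested above is cheap; a sketch (property IDs P22/P25/P40 are father, mother, and child; the function name is illustrative, not the bot's actual code):

```python
def drop_self_references(subject_qid, claims):
    """Filter out family claims whose target equals the subject itself:
    a person cannot be their own father (P22), mother (P25), or child (P40)."""
    family_props = {"P22", "P25", "P40"}
    kept, dropped = [], []
    for prop, target in claims:
        if prop in family_props and target == subject_qid:
            dropped.append((prop, target))
        else:
            kept.append((prop, target))
    return kept, dropped
```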

page(s) (P304)

Currently page(s) (P304) is translated as "页码" in Simplified Chinese and "頁碼" in Traditional Chinese, but with the recently created folio(s) (P7416) there is a problem: a direct Chinese translation of "folio" would also be "页码/頁碼", which creates a conflict, and technically Wikidata does not allow two or more properties to use exactly the same name (this also applies to translations). So I would love to know your opinion on what "页码/頁碼" should refer to: should P304 keep its existing translation and another suitable translation be chosen for P7416, or should the P304 translation be changed and "页码/頁碼" be used for P7416 instead? --Liuxinyu970226 (talk) 14:09, 29 November 2019 (UTC)

Your skillsEdit

Your skills are required on this page and this page. Cordially. —Eihel (talk) 02:06, 3 December 2019 (UTC)

Very weird matching

Thanks for the Peerage import - it's really helpful to be able to see so many family links emerging. I've been going through cleaning them up and discovered a very weird pair of uploads:

Both the "correct" items had the Peerage ID links. Looking at the numbers, I am guessing this was some kind of a glitch with the item IDs but thought I'd let you know about it in case it cropped up somewhere else. Andrew Gray (talk) 11:51, 4 December 2019 (UTC)

@Andrew Gray: Before I imported new items, I checked all people with a father or mother without a Peerage ID. Sometimes there is another item with the same Peerage ID as the ID intended to be added (e.g. Q61754324 was to have the same ID as Q54867884). This may be either a duplicate, or a mistake in Wikidata or The Peerage (all have happened). As I maintain a SQL database tracking the Qids of all peerage entries (for easier querying), it must be updated manually using UPDATE bio SET q='Qxxx' WHERE num_id=yyy. Once all new items are created, the database is queried to form a list of statements to add (by joining with another table containing the child information extracted from The Peerage).--GZWDer (talk) 14:28, 4 December 2019 (UTC)
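The workflow just described might look roughly like this in SQLite (the `bio` table and the UPDATE statement come from the message; the `child` table layout, the sample data, and the join are assumptions for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# bio tracks each Peerage entry's matched Wikidata Qid.
cur.execute("CREATE TABLE bio (num_id INTEGER PRIMARY KEY, q TEXT)")
# child holds parent/child links extracted from The Peerage (layout assumed).
cur.execute("CREATE TABLE child (parent_id INTEGER, child_id INTEGER)")

cur.executemany("INSERT INTO bio VALUES (?, ?)",
                [(1, "Q100"), (2, "Q200"), (3, "Q300")])
cur.executemany("INSERT INTO child VALUES (?, ?)", [(1, 2), (1, 3)])

# Manual correction when an entry should point at an existing item instead.
cur.execute("UPDATE bio SET q='Q301' WHERE num_id=3")

# Join bio with child to form the list of child (P40) statements to add.
cur.execute("""
    SELECT parent.q, kid.q
    FROM child
    JOIN bio AS parent ON parent.num_id = child.parent_id
    JOIN bio AS kid    ON kid.num_id    = child.child_id
""")
statements = [(parent_q, "P40", child_q) for parent_q, child_q in cur.fetchall()]
```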

The Peerage duplicates

The import of the Peerage database was great for WikiTree linking to Wikidata. I match the most probable items on both ends; in the last month several thousand were added. But I noticed that you are creating quite a few duplicates, which show up as unique-value violations on the WikiTree person ID property (P2949). Here is a list of 170 duplicates, with possible duplicates among relatives. Most of them have one of the new items in the mix. According to SPARQL [2] there are 600 of them.

I don't know exactly how to address this. Obviously they need to be merged, but I don't really have time to do that.

@Lesko987a: If you are certain that they are the same (usually you should check these items first), see Help:QuickStatements#Item_merging for ways to mass merge. Note that some "duplicates" actually involve two established (different) items.--GZWDer (talk) 19:45, 6 December 2019 (UTC)
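For a hand-verified list, the QuickStatements batch mentioned above is just tab-separated MERGE commands; a sketch that builds them (the pair data and function name are illustrative):

```python
def quickstatements_merge_batch(pairs):
    """Build a QuickStatements batch of tab-separated MERGE commands from
    hand-verified (duplicate, target) Qid pairs."""
    return "\n".join("MERGE\t%s\t%s" % (dup, target) for dup, target in pairs)
```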

The Peerage and notability

I read that The Peerage database contains about 600,000 people, and that all of them have been imported or are in the process of being imported. Do all those people satisfy the Wikidata criteria of notability? Or is being included in The Peerage enough to be considered notable? Many entries in The Peerage have as their only information that they are a child of X or a spouse of Y. In that case, any person with a well-maintained profile in any (serious) genealogical database could be considered notable and added to Wikidata. Or do I miss something? Bvatant (talk) 23:34, 6 December 2019 (UTC)

Wikidata is intended to be a genealogical database, though people can be added only if there is a source. Many people are described in books like Burke's.--GZWDer (talk) 23:40, 6 December 2019 (UTC)
Wikidata is NOT a genealogy database; it's not enough that you have a source... In Sweden we have household records since 1600, but that doesn't mean that everything should be in Wikidata. Please speak with the WikiTree people or set up your own Wikibase. An item like John Wadman (Q76205668) is what you call having a source... Listen to Lydia speaking in Berlin about how the Wikidata project sees the future... and don't invent your own rules - Salgo60 (talk) 18:54, 7 December 2019 (UTC)
"It refers to an instance of a clearly identifiable conceptual or material entity. The entity must be notable, in the sense that it can be described using serious and publicly available references." so it's enough for everyone described in Burke's to have an item as they are described by Burke's.--GZWDer (talk) 19:14, 7 December 2019 (UTC)
I assume you didn't click on the video, where it's stated that not everything should be in Wikidata. See this graph of Q76205668: what is the added value of mass-importing this into Wikidata? I guess not adding a description is an indication that you yourself know nothing about the object, or is it a mistake?... - Salgo60 (talk) 20:06, 7 December 2019 (UTC)
This is crazy
- Salgo60 (talk) 20:22, 7 December 2019 (UTC)
Right now Denny Vrandečić is speaking about the Wikidata project; see tweet:
"explains that Wikidata, like Wikipedia, aims not to decide what is true but to share what is verifiable in reliable sources."
With this latest import I feel you added less-reliable sources, and the question is whether Wikidata can handle that - Salgo60 (talk) 10:53, 10 December 2019 (UTC)
Thanks folks for this very informative exchange. Now what I'm wondering about is the decision process. I have never imported data in Wikidata, have not the technical skills to do it, or even to begin to understand how it works, but I can understand the decision process. I suppose you have to ask somewhere/somebody before making such an import. Whom do you ask? What is the decision process? Because I suppose the debate I see here should have taken place before starting the actual import. Did it? Bvatant (talk) 22:11, 11 December 2019 (UTC)
And BTW, GZWDer, thanks for notifying me about my forgotten Universimmedia account. I put a few historical details on it, making me a bit nostalgic of the early days of Wikipedia, 20 years ago, when trust and hope were ruling (I'm afraid I've lost both, but maybe it's only the old age).Bvatant (talk) 00:22, 12 December 2019 (UTC)
@Bvatant: we also discussed this on Project Chat....
One possibility is to ask Alés not to report diffs with WikiTree when the only source in Wikidata is The Peerage person ID (P4638). I also feel sad that Wikidata will not be part of the cool tool Alés has built... but as I read in the WikiTree discussion, sitting down and checking 139 errors can take months if you are serious about genealogy. I wish Wikidata had better control over whether a source is worth trusting or not; see my text about the need for quality - Salgo60 (talk) 00:43, 13 December 2019 (UTC)
A model of source quality is important, but another question is how this can be expressed in a queryable way (how to surface the concept to data users). Wikipedia uses a number of lower-quality sources too.--GZWDer (talk) 01:08, 13 December 2019 (UTC)
@Salgo60: You touch on an idea I have had for a while and will soon propose inside WikiTree: to have a "quality stamp", backed by a to-be-defined but serious peer-review process. Profiles with such a "quality stamp" could be pushed to Wikidata as quality data. That would make for the machine-readable quality stamp GZWDer is asking for. Bvatant (talk) 10:37, 13 December 2019 (UTC)
Yes, this is an important point in both Wikidata and WikiTree - WikiTree is basically an open wiki, so we must have some way to differentiate various kinds of sources in order to filter for trustworthy information. This includes:
  • information that cannot be verified by any published source (personal knowledge, or GEDCOM files)
  • information found only in primary sources (birth/death/marriage/household/census records) - synthesis of such material, though possible in genealogical research, does not meet Wikipedia's standards and may potentially result in conflation of multiple people
  • information found in published, peer-reviewed secondary sources (e.g. scholarly journals, encyclopedias, and other reliable sources)
  • information found in published but not fully peer-reviewed secondary sources (e.g. Burke's) - allowed in Wikipedia but not recommended in genealogical research
  • information found in self-published secondary sources (e.g. The Peerage) - this should be replaced with other sources if possible
  • information found in user-generated secondary sources, i.e. open wikis (e.g. WikiTree, Wikipedia) - this should be replaced with other sources if possible

--GZWDer (talk) 10:59, 13 December 2019 (UTC)

Feedback of the WikiTree community on The Peerage import

While waiting for a clear answer to my previous question about the process that allowed The Peerage to be imported into Wikidata, I want to make you aware of the reaction of the WikiTree community. There has been work at the interface between Wikidata and WikiTree: what comes from Wikidata to the dashboard of WikiTree users are suggestions generated by comparing the profiles they manage with their matches in Wikidata. This initiative has so far received a lukewarm welcome from regular WikiTree users, who are more likely to be baby-boomer grandparents with years of patient genealogical work behind them, often pre-web and pre-computer, and who do not care much about, or do not understand, what Wikidata is all about. On the other hand, WikiTree's ethic is to accept genealogical data only if it is backed by primary sources or reliable secondary sources.

I asked a few days ago on the WikiTree forum whether people felt The Peerage import was good news or bad news. I regret to inform you that the overwhelming answer is that this is VERY BAD news, because all serious genealogists working on British genealogy know that The Peerage is not a reliable source, and since the import of The Peerage their dashboards have been overloaded with suggestions of genealogical junk they already had a hard time getting rid of.


The net result at the moment is that the efforts made so far to convince WikiTree users of the benefits of Wikidata have been nipped in the bud by this affair. You will read reactions from very serious people who no longer want to hear about Wikidata at all.

I would like this affair to be food for thought. Wikidata makes sense if it is adopted by external users. But users in specific domains have high data-quality standards, and won't buy into Wikidata if those standards are not met. Bvatant (talk) 23:34, 12 December 2019 (UTC)

Wikidata is intended to be a secondary database, collecting information and knowledge from other sources. Wikidata does not actually guarantee that the information is true, but it does require that information be verifiable. This includes a huge amount of data imported from Wikipedia, which is clearly not a reliable source (no better than The Peerage), but it means people may either verify the data in Wikipedia or, if it is unverifiable, remove it. See also Wikidata:Property proposal/verifiability of property for a recent proposal.--GZWDer (talk) 23:46, 12 December 2019 (UTC)
Maybe it is intended that way, but please read the discussion to see that people at WikiTree don't see it that way at all. And how do you answer their concerns? The answer chosen as "best" in the linked thread has put in bold "What is the point of Wikidata?" Although I've been a so-called "linked data evangelist" for years (now retired), I don't know how to answer this concern properly, and am sad to say I must agree with their current decision to stop having Wikidata suggestions come to their dashboard, because in the current state of affairs they are counter-productive. Bvatant (talk) 00:16, 13 December 2019 (UTC)
"explains that Wikidata, like Wikipedia, aims not to decide what is true but to share what is verifiable in reliable sources."
What we see here is that the "ecosystem" is getting a lot of objects based on non-reliable sources, something WikiTree sees daily. A lesson learned yesterday, when I created > 50 000 Wikidata properties, is that a lot of items are deleted in Wikidata, so maybe this mess will get cleaned up (see my list of deleted referenced objects, T240738)... the future will tell...
  • Lesson number 2 is that WD objects without a description are a killer in the new Wikimedia Commons search: try searching for John Wadman and you will see how bad it is... now people need to open Wikidata to understand which John Wadman is meant - Salgo60 (talk) 06:14, 14 December 2019 (UTC)
See phab:T240265 for this issue.--GZWDer (talk) 06:16, 14 December 2019 (UTC)
Chasing statistics makes no one happy... Adding an unknown John Wadman twice, as two WD objects without a description, always requires an extra click to check which person we are speaking about. With Wikibase and ontology reuse, it becomes even more important to care about how data is added... and what data is added... - Salgo60 (talk) 16:49, 23 December 2019 (UTC)

Wikidata calls itself a "knowledge base". But the past is by nature almost unknowable. In genealogy (and other historical areas) there's a big difference between data and knowledge. The day job of historians is to start with inadequate primary data and interpret the hell out of it. They don't often resist the temptation to over-interpret. What they end up with might well be true. It might be the likeliest possible interpretation. But we don't actually know it. When we get our time machines, we won't be too surprised if many things were very different from what it says in the books.

So much of what goes out is supposed, surmised, guessed, conjectured, imagined, made up, corrupted in transmission, or simply known to be wrong. Writers easily convince themselves that their guesses must be right. Then another writer turns a guess into a probable, and the next writer gives it as a stone-cold fact. Historians rarely check original sources when they can rely on "authorities", i.e. anything that somebody else has published, so original mistakes can circulate for decades before they're caught. Fact-checking is rare even in the best publications. They talk about verifiability in reliable sources. But there's very little in history, and especially genealogy, that comes anywhere close to WP:RS.

This also goes to Notability, because the general idea is that a person is notable if enough effort has been put into investigating that person for publication in reliable sources. If reliable sources don't exist, they can't be notable. With historical data, one entry in one database can't even give us much confidence that the person isn't fictitious. 09:42, 16 December 2019 (UTC)

More than agree - Salgo60 (talk) 10:57, 27 December 2019 (UTC)

Duplicates upon duplicates upon duplicates

Please take the time to clean up the duplicate mess made by your bot. It's making it impossible to merge any of the items, because if you merge one, you end up having to take hours to merge all the family members too. -Yupik (talk) 01:18, 30 December 2019 (UTC)
