User talk:ArthurPSmith/Archive/6

Author disambiguator

Author disambiguator is the best thing since sliced bread. Thanks soooo much for this tool. - PKM (talk) 20:05, 5 December 2018 (UTC)

  • Great tool. Interestingly, @Alexmar983: asked me for something like it just the other day.
    BTW, if one types a full name (first middle last), the fuzzy search seems to find people without the middle name, but not those where the middle name is reduced to an initial. Maybe these should also be found when starting from a first+last name (see the sketch below).
    Maybe the tool could also check whether VIAF is present (and suggest adding it). If you check for just a single one, that is probably the most useful. There are obviously a few other (non-library) ones likely to be found on such author items (notably Scopus, ResearchGate, even LinkedIn). --- Jura 05:49, 7 December 2018 (UTC)
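A minimal Python sketch of the variant expansion suggested above - the helper is hypothetical, not actual Author Disambiguator code:

    # Expand a "first middle last" query into forms a fuzzy name search
    # could also try (hypothetical helper, not tool code).
    def name_variants(full_name):
        parts = full_name.split()
        variants = {full_name}
        if len(parts) == 3:
            first, middle, last = parts
            variants.add(f"{first} {last}")                   # middle name dropped
            variants.add(f"{first} {middle[0]}. {last}")      # middle reduced to an initial
            variants.add(f"{first[0]}. {middle[0]}. {last}")  # initials only
        return variants

    print(name_variants("John Maynard Smith"))
    # e.g. {'John Maynard Smith', 'John Smith', 'John M. Smith', 'J. M. Smith'}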
I follow users' needs in real time and am often looking at what is under development, so I am not surprised ;) Thanks for pinging me--Alexmar983 (talk) 06:06, 7 December 2018 (UTC)
The "fuzzy search" logic definitely needs a bit of work. I'm currently trying to improve the clustering, which doesn't really do what you would expect. Definitely good suggestions on looking at other author IDs besides ORCID! ArthurPSmith (talk) 14:12, 7 December 2018 (UTC)
I noticed, but I wasn't sure what to suggest. In one case, a possibility to sort by journal would have been handy. For another, "check all" was sufficient. --- Jura 08:43, 8 December 2018 (UTC)
@PKM, Jura1, Alexmar983: I thought you might want to know - the service has been considerably updated in a number of different ways: (1) Author name searching is, I think, much better (though it is now case-sensitive as it is using SPARQL literals), (2) Article clustering should be much more sensible, (3) I added a VIAF search/input form. (4) I've limited the number of articles shown to prevent some out-of-memory and related problems, though there are still some issues there needing further improvement. And there have been a number of other updates and fixes, so it should be even easier to use... ArthurPSmith (talk) 15:35, 8 January 2019 (UTC)
Thank you ArthurPSmith.--Alexmar983 (talk) 15:55, 8 January 2019 (UTC)
Excellent! Thanks for the update. - PKM (talk) 23:29, 8 January 2019 (UTC)
@ArthurPSmith: Great tool, indeed! I would like to refer to it in a paper. Besides the URL, is there any paper available to cite? Thank you in advance! --Carlobia (talk) 16:40, 7 January 2021 (UTC)
@Carlobia: Nothing in "paper" form, but I have given a few presentations on it; I don't know if they would count as suitable references? Most recently at the 2020 WikiCite virtual conference - https://meta.wikimedia.org/wiki/WikiCite/2020_Virtual_conference ArthurPSmith (talk) 18:16, 7 January 2021 (UTC)
@ArthurPSmith: Thank you! It will fit perfectly! --Carlobia (talk) 08:16, 8 January 2021 (UTC)

FYI

https://phabricator.wikimedia.org/T187611

50.254.21.213 00:30, 15 December 2018 (UTC)

India

Could you please look into India (L40021)? It's in English, but I think users somehow misunderstood the purpose of the lexeme structure. Senses have pronunciations, translations are written as representations of forms, etc. KaMan (talk) 09:48, 15 December 2018 (UTC)

Yeah, looks like some people got carried away there. I added the standard form and moved the pronunciations there. Not sure what you mean about translations - there aren't any listed right now? Or do you mean the sense glosses? I think those are ok. ArthurPSmith (talk) 13:15, 15 December 2018 (UTC)
I mean in form L40021-F1 there are four representations, with language "en" (India's), "te" (భారత దేశం యొక్క), "ml" (ഇന്ത്യയുടെ), "bn" (ভারতের). To me they look more like translations, but they could be transcriptions as well; I do not know. KaMan (talk) 13:53, 15 December 2018 (UTC)

Closing RFC

Hi Arthur, since you were not involved in the discussion, and hoping that you don't have further remarks, could you please close this RFC? The consensus seems to be there, so once it is closed I can add the information to Wikidata:Property creators (or you can do it yourself if you wish).--Micru (talk) 23:34, 17 December 2018 (UTC)

@Micru: Done - however, in addition to the edits to Wikidata:Property creators, it wasn't clear to me what the plan was for Proposal 3; hopefully you can sort out what needs to happen there? ArthurPSmith (talk) 16:01, 18 December 2018 (UTC)
Thanks a lot for the closing and for the really nice summary. I will look into it asap.--Micru (talk) 16:02, 18 December 2018 (UTC)

Lemma in English

Should (L40585) have its first letter uppercased, or is it just a duplicate of lemma (L14835)? KaMan (talk) 06:49, 29 December 2018 (UTC)

Looks like an anonymous user was experimenting. I merged them. ArthurPSmith (talk) 15:01, 29 December 2018 (UTC)

Mongols and com

invalid ID (L41371) and invalid ID (L41449) - looking at the lexical category, I'm not sure whether these lexemes should be deleted or corrected. It's English, so I will leave it up to you. KaMan (talk) 08:13, 17 January 2019 (UTC)

Thanks, I've requested their deletion. I wonder if we should have a special Lexeme deletion requests page? ArthurPSmith (talk) 15:09, 17 January 2019 (UTC)
I request deletion of about one "lexeme" per day (today two) and they are usually deleted very fast. I think a separate page could be problematic for administrators (yet another page to watch), so I would stay with the current global page. KaMan (talk) 15:36, 17 January 2019 (UTC)

Re: Which Federica Fabbri?

Hi Arthur, I think it is the same person, but I am not fully certain, so you can delete it if you think that appropriate. Thanks for the tip, Alessandra Boccone (talk) 11:24, 22 January 2019 (UTC)

Colors as subclass of entity?

Back in May 2018, you changed the subclass of various colors from "color" to "entity" (e.g. in this edit). This seems wrong to me, but I thought I'd ask you about it before reverting. Can you explain further? JesseW (talk) 05:04, 6 February 2019 (UTC)

@JesseW: "red" is a color. "color" is not a color. Their ontological status is quite different, so "subclass" makes no sense; the relation "instance of" (P31) was there all along and is correct. dark red (Q5223370) subclass of (P279) red (Q3142) is fine - the more narrowly defined color is subsumed within the broader one. There's no such parent relation available for the primary colors. ArthurPSmith (talk) 14:54, 6 February 2019 (UTC)
Excellent, thank you! I'll copy this on to some of the relevant talk pages, so other people wondering about it can find it more easily. JesseW (talk) 03:30, 7 February 2019 (UTC)
...This seems wrong. All dark reds are reds, all reds are colors, and all colors being X would correctly imply that all reds are X. That seems to match the subclass of (P279) relation? It might be that color (Q1075) is associated with a particular sense of "color" which doesn't match this use, but another item with the same label might? I'm not quite sure how best to handle this.
In any case, setting "red" to be a direct subclass of "entity" is certainly not the best answer. --Yair rand (talk) 07:06, 28 February 2019 (UTC)
@Yair rand: No - "the rose is red" is a very different statement from "the rose is color". ArthurPSmith (talk) 15:08, 28 February 2019 (UTC)
That would be the case for any adjective. That sentence uses the adjective sense (the rose isn't actually noun-sense red), which is presumably not the topic of the item. To the best of my knowledge, there are no items with an adjective sense as the topic. I don't even know how that could be made to work for pretty much anything. --Yair rand (talk) 17:42, 28 February 2019 (UTC)
@Yair rand: I suppose that's a fair point. Nevertheless, when you say "dark reds are reds" and "all reds are colors", the "are" in those two sentences has different meanings - in Wikidata terms the first is P279, the second is P31. If there were some item that could be considered a superclass of "red" in the same sense as the "dark red : red" relationship, it would need to be something like "red in a broader sense" - "red plus infrared" perhaps, or "red and purple". "Color" doesn't make sense to me at all in that role. ArthurPSmith (talk) 18:26, 28 February 2019 (UTC)
@Yair rand: I've been thinking about it some more - I think the main issue is that with the color hierarchy we are using P279 (subclass) as a proxy for a more specific property like "within the color space of". So in reality I think NONE of the colors should be considered classes at all (what are their instances anyway?) - rather they should be treated just as we treat locations: as a possibly overlapping hierarchy of entities with their own parent/child relation, all instances of "color". What do you think of this approach - i.e. should we propose a new property for this? ArthurPSmith (talk) 15:52, 1 March 2019 (UTC)

Devyn Grillo

invalid ID (L42364) - Is this some kind of proper name in English, or a candidate for deletion? KaMan (talk) 08:41, 12 February 2019 (UTC)

Deletion - it seems to be the user's own name. ArthurPSmith (talk) 14:20, 12 February 2019 (UTC)
What about invalid ID (L42322) (look at the lexical category)? KaMan (talk) 13:06, 13 February 2019 (UTC)
Not a word - thanks! How are you noticing these? ArthurPSmith (talk) 13:27, 13 February 2019 (UTC)
Every new day I read all the new lexemes since the last day. It's not that much. KaMan (talk) 15:29, 13 February 2019 (UTC)
Another English word (L42840) KaMan (talk) 07:53, 18 February 2019 (UTC)
@KaMan: Merged! ArthurPSmith (talk) 15:00, 18 February 2019 (UTC)

New parameter proposal to property P5892

Hello, @ArthurPSmith:, I hope you are well! Can I ask for your guidance on the best place to propose an alteration to the property UOL Eleições ID (P5892)? I'm finally creating the items for politicians, so the items for the elections can be created and this identifier can be used. To do that, I think an update to the property is needed (I explain why here). To whom, or where, do I submit this request? Thank you in advance, Ederporto (talk) 06:58, 15 February 2019 (UTC)

I commented on the property talk page - maybe next Tuesday (Feb 19) will work to make the change? ArthurPSmith (talk) 19:47, 15 February 2019 (UTC)

P6516 and externalid resolver

Hi, it seems that the P6516 formatter URL needs your externalid resolver, as seen at Diaspidiotus juglansregiae (Q10470807). I tried several things at Aonidiella citrina (Q10414113) as well. Can you adjust the resolver and the formatter URL to make it function? Thanks in advance. Lymantria (talk) 22:10, 19 February 2019 (UTC)

@Lymantria: For URL-encoding issues it doesn't actually need any special coding; you can just drop it in. I edited the formatter URL on P6516 to use it, and it seems to work (see the two examples with spaces). You can either stick with the '%20' or use ' ' as the separator here; it seems to work either way. ArthurPSmith (talk) 19:10, 20 February 2019 (UTC)
Thank you. That's weird - I am shown a 404 error all the time. The '%20' is apparently translated into '%2520' by the software, and the ScaleNet website doesn't accept the ' ' separator either when I try it. Neither in Firefox nor in Chrome. Lymantria (talk) 22:01, 20 February 2019 (UTC)
Ah, I see. That must be a caching problem. Thanks again. Lymantria (talk) 22:05, 20 February 2019 (UTC)
Oh yes, you have to edit the identifier for it to be recalculated, I think. ArthurPSmith (talk) 22:37, 20 February 2019 (UTC)
It works fine. Thanks much. Lymantria (talk) 11:56, 21 February 2019 (UTC)
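The '%2520' symptom above is classic double percent-encoding: a literal '%' in an already-encoded identifier gets re-encoded as '%25'. A minimal Python illustration (the sample value is made up; this is not the MediaWiki code itself):

    from urllib.parse import quote

    raw_id = "Aonidiella citrina"   # an identifier containing a space
    once = quote(raw_id)            # 'Aonidiella%20citrina' - what the formatter URL wants
    twice = quote(once)             # 'Aonidiella%2520citrina' - the 404-producing form
    print(once)
    print(twice)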

subclass of (P279)

Hi Arthur,

Thank you for pointing this out. At first, I had a single item with two values for ISBN-13 (P212), but I got a warning. I do not remember what it was saying, but I thought this was a way to bypass the problem… Thank you. Genium (talk) 17:47, 13 March 2019 (UTC)

Sure to support a hoax?

https://www.wikidata.org/w/index.php?title=Wikidata:Property_proposal/Baltisches_Biographisches_Lexikon_Digital_ID_(new_scheme)&diff=907437273&oldid=907382942

https://bbld.de/info/id: "The page https://www.wikidata.org/wiki/Wikidata:Property_proposal/Baltisches_Biographisches_Lexikon_Digital_ID_(new_scheme) is based on a hoax by "MisterSynergy"". Good luck! 78.54.8.90 22:52, 8 April 2019 (UTC)

splitting up external ID based on regex

There is no split of

  1. GND ID into
    1. with "-"
    2. without "-"
  2. VIAF ID into
    1. [1-9]\d(\d{0,7})
    2. [1-9]\d(\d{17,20})

etc. Why then would one split out BBLD IDs that match /[0-9]{16}/? 78.55.46.198 23:10, 8 April 2019 (UTC)
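In case the shorthand is unclear: "splitting by regex" means routing identifiers to different properties according to their textual shape. A minimal Python sketch of the split being questioned, using the /[0-9]{16}/ pattern quoted above - the routing function and sample IDs are hypothetical:

    import re

    def bbld_scheme(bbld_id):
        # The proposal treats purely numeric 16-digit IDs as the "new scheme";
        # everything else would stay on the existing property.
        if re.fullmatch(r"[0-9]{16}", bbld_id):
            return "new scheme"
        return "former scheme"

    print(bbld_scheme("0000000012345678"))  # new scheme (hypothetical ID)
    print(bbld_scheme("GND118514768"))      # former scheme (hypothetical ID)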

It was requested, with no serious objections. I'm sure there's a history I'm unaware of, but I don't see how it's relevant. ArthurPSmith (talk) 17:17, 9 April 2019 (UTC)
And no serious support. And no serious evidence for its existence. It was created as part of a campaign by Jura1 and MisterSynergy; look at https://bbld.de/info/id - there are different sources for creating a BBLD ID, but the notion that some belong to a "former scheme" and others to a "new scheme" is not supported at all. Is Wikidata going to create a new property for each BBLD ID creation mechanism? Or even better, a pair for each mechanism, to have former IDs and new IDs separated (how long is an ID new?)... wait, maybe three, to have former-current-new separated into different properties? Could all that violate en:WP:OR? 78.54.5.147 23:35, 10 April 2019 (UTC)

Is SourceMD really working?

I am not sure if it was on and then off. Using SourceMD last night, I loaded a list of DOIs and the items are fine (correction... I actually loaded those items a couple of weeks ago). I tried loading a few individual DOIs today, and SourceMD says the batches were successful, though I cannot find the items (or the DOIs). Strange. As an example, Batch 6878. Trilotat (talk) 14:30, 15 April 2019 (UTC)

It doesn't seem to be listing anything on that batch; not sure what that means? I haven't tried it myself - it just looked like the change Magnus made would definitely fix the problem we were running into. ArthurPSmith (talk) 15:29, 15 April 2019 (UTC)

Christian hymns / canticles

Hi there! Re: your revert: https://www.wikidata.org/w/index.php?title=Q856713&oldid=prev&diff=920031489 I'm in the middle of trying to clean up the cluster of hymns, psalms, canticles, national anthems etc., and it will be a little messy for a while as I move things around; I hope you can bear with me. It's a proper mess at the moment, thoroughly mixed together: the Scandinavian word "salme" is extensively used both for Christian songs and Christian poems, and not just for psalms (and never for sports!), while the Spanish/Portuguese call most of their local, national and sports anthems "himno/hino", getting them mixed in with the religious ones. The Germans have a whole bunch of strict, narrow definitions of course, and the English borrow freely from all the above. So there you have it; I hope it doesn't disturb things too much, and it shouldn't take too long to fix. Moebeus (talk) 16:17, 20 April 2019 (UTC)

Ok - I just noticed your post on Project Chat about it. I ran into it because you'd created a subclass loop, which is a no-no and gets caught in one of our Listeria reports... ArthurPSmith (talk) 16:24, 20 April 2019 (UTC)

WikidataCon submission on Author Disambiguator?

Hi Arthur, are you planning on (i) attending and (ii) making such a submission? I will likely not be able to attend in person, but I would be interested in helping with something on the topic, especially the part about integration with Scholia or Listeria to round out curation workflows. --Daniel Mietchen (talk) 13:12, 24 April 2019 (UTC)

@Daniel Mietchen: Yes, I already submitted a proposal for a 25-minute presentation - your input on it would be great, thanks! ArthurPSmith (talk) 18:21, 24 April 2019 (UTC)
Sounds good — count me in when preparation time comes. --Daniel Mietchen (talk) 03:01, 25 April 2019 (UTC)

Query service lag

Could you refrain from editing large items for a while? We're experiencing some lag on the query service atm... Sjoerd de Bruin (talk) 14:35, 24 April 2019 (UTC)

Due to the "stop batch" feature not working, I've blocked your account so the query service can recover. Sjoerd de Bruin (talk) 15:01, 24 April 2019 (UTC)
Hi @Sjoerddebruin: - sorry, I was traveling. Hmm, I've been working on large items for the last several days; I didn't realize it could contribute to WDQS lag. Is there some background info on why this happens and how to avoid it? ArthurPSmith (talk) 18:23, 24 April 2019 (UTC)
I have no idea what caused today's issues, still investigating. Sjoerd de Bruin (talk) 18:25, 24 April 2019 (UTC)
Anyway, thanks for the Grafana pointer; I'm going to run a small collection of updates now and see how it affects things. ArthurPSmith (talk) 18:26, 24 April 2019 (UTC)
Hmm, it does look like a couple of the WDQS servers gain a few minutes of lag when I start one of those jobs, and it goes away when I stop it. I'm not sure the pattern is entirely consistent though. I've just restarted the one that was stopped earlier today, which is longer than the others I had prepared; hopefully just running that one will not cause too severe a problem. I'll check in again later today. ArthurPSmith (talk) 20:29, 24 April 2019 (UTC)
@Sjoerddebruin: is there a phab task or other activity I could look at on this? I had 3 batch jobs updating large items running most of last night, and Grafana indicated there was no problem until about 11:00 GMT this morning (the jobs had been running since about 01:00 GMT); I checked around 13:30 GMT and noticed the lags were still high on two of the servers, so I stopped the batch jobs. One of the servers seems to have recovered, although not immediately, but the other (wdqs1005) still has over a 40-minute lag several hours later. So there's definitely something else going on that's making these lags so bad. ArthurPSmith (talk) 16:02, 26 April 2019 (UTC)
Sorry, we don't have a task for the current issues yet, but I do see a pattern between edits to large items and the query service lag. The volume of your edits in the last 6 hours was 3.2 GB, which all needs to be processed (the query service currently reloads whole items on updates; work is needed on that). Sjoerd de Bruin (talk) 07:25, 27 April 2019 (UTC)
I can confirm that the problem seems to come from batch edits to large items - stopping Daniel Mietchen's jobs impacting large items had a pretty clear effect two days ago. According to Wikiscan you are the only one running batches affecting large items at the moment, so I would expect the lag to drop if you stop them. − Pintoch (talk) 10:34, 27 April 2019 (UTC)
@Pintoch, Sjoerddebruin: I've stopped the large-edit jobs for now and will watch the lag to see if it's safe to restart. These same jobs were running for about 10 hours earlier yesterday with no bad lag, though. From the 24-hour "wikiscan" there were some other people with multi-GB updates in the past day. Any idea why only 2 of the WDQS servers seem to be affected? ArthurPSmith (talk) 13:04, 27 April 2019 (UTC)
It's been 2 hours, and there's no noticeable improvement in the lag. I really don't see a correlation with the edits I've been doing at all. ArthurPSmith (talk) 15:21, 27 April 2019 (UTC)
6 hours now. I'm restarting the jobs; there was no discernible effect from my turning them off. My edit rate is really slow; I have a hard time believing I'm causing the problem here. ArthurPSmith (talk) 19:23, 27 April 2019 (UTC)
And now, with those batches running for the last few hours, query lag has dropped almost to zero for all the servers. My jobs at least seem pretty clearly not to be making things worse. ArthurPSmith (talk) 00:38, 28 April 2019 (UTC)
By the way, I suspect "wikiscan" is seriously overestimating the impact of the jobs I'm running - they generally make 4 or 5 edits to the same item one after the other, so the actual volume that has to be moved should be at most 1/4 of what's stated, assuming it's counting the size of the item for each edit and WDQS doesn't copy the data 4 or 5 times when that's not needed. ArthurPSmith (talk) 00:40, 28 April 2019 (UTC)
Thanks for experimenting! "wdqs doesn't copy the data 4 or 5 times when that's not needed" - I don't think that is true; my understanding is that it does copy the data 4 or 5 times in these cases (we had a discussion with Stas on IRC about that a few days ago and he confirmed it). − Pintoch (talk) 08:24, 28 April 2019 (UTC)
The lag is now more than two hours again. As said above, the edit rate isn't the problem but the affected items. Yes, there are a few others with such high edit volume, but they edit a lot more items. At some periods of the day there isn't much other activity, so the query service can handle it. But when others are also running batches it's a problem. Please, for our (data) users: postpone for the time being. Sjoerd de Bruin (talk) 14:43, 29 April 2019 (UTC)
@Sjoerddebruin: I turned it off, and the lag continued to climb. It's clearly NOT me that's the problem here. ArthurPSmith (talk) 17:34, 29 April 2019 (UTC)
Note that when you disable a mass-editing bot, the lag won't go down immediately. The service still has to go through the accumulated backlog of edits, and the lag starts to go down only when the sync point gets past the point where the bot was turned off. If you have 100 edits/s for the last hour and the Updater can only do 50 edits/s, then it still takes 2 hrs to go through that hour of updates, even if the editing is turned off now, because the Updater is not at the 'now' yet. This means the lag will keep rising. Smalyshev (WMF) (talk) 19:19, 29 April 2019 (UTC)
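The arithmetic in that example, worked through as a toy Python model (not how the Updater actually measures lag):

    incoming = 100        # edits/s during a one-hour editing burst
    capacity = 50         # edits/s the Updater can replay
    burst = 3600          # seconds of heavy editing

    total_edits = incoming * burst                     # 360,000 edits to replay
    time_to_process = total_edits / capacity           # 7,200 s = 2 h for that hour's edits
    backlog_at_cutoff = (incoming - capacity) * burst  # 180,000 edits still queued when editing stops
    extra_lag = backlog_at_cutoff / capacity           # 3,600 s: lag keeps rising ~1 h after the cutoff
    print(time_to_process, extra_lag)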
@Smalyshev (WMF): But I'm NOT doing "50 edits/s". I'm doing about 1 edit per 10 seconds at most. And as noted above (and as happened today) the lag continued to rise for HOURS after I shut down the job. It really can't be my jobs that are the problem here. ArthurPSmith (talk) 20:21, 29 April 2019 (UTC)

I really would like to understand the underlying problem here - Smalyshev (WMF), is there documentation of the different servers shown on this Grafana chart? Why are wdqs1004 and wdqs1005 (and sometimes wdqs1006) always the ones with long lags, while wdqs2001/2/3 are usually fine? Is a few GB of data over 6 hours really overwhelming the network or something? ArthurPSmith (talk) 20:31, 29 April 2019 (UTC)

1004/5/6 are public eqiad-cluster servers; 2001/2/3 are public codfw-cluster servers. The eqiad cluster usually gets the most traffic. The problem is not network data size but the number of updates to Wikidata. The Updater has to process all of them, plus all the query load (and, judging from the number of bans, people still keep ignoring the throttling system and trying to force through as many queries as possible). If there are too many updates or too many queries, the servers get slow, which shows up as lag.
The cluster setup is described here: https://wikitech.wikimedia.org/wiki/Wikidata_query_service#Hardware
Smalyshev (WMF) (talk) 20:36, 29 April 2019 (UTC)
@Smalyshev (WMF): Thanks, that's very helpful! I didn't know about eqiad/codfw before, and the impact of load balancing explains the discrepancy... I assume all updates have to go to all servers and it's just queries that are load-balanced - so the underlying cause of the lag for the last month or so (given the codfw servers have been fine) has to be high query volume, not a problem with updates (though of course with no updates there would be no lag issue!). However, the peak query time from this chart seems to be daily around noon, while the comparable lag chart seems to usually peak around 22:00 (and is not consistently happening every day). So that doesn't entirely explain things either... Anyway, I guess I'll try to avoid running batches between noon and midnight GMT and see if that helps at all. ArthurPSmith (talk) 21:13, 29 April 2019 (UTC)
Yes, all servers do the updates, but the query load is different. However, it is a threshold problem - if the number of incoming updates is less than the number of updates a server can process, it is fine, regardless of how large the difference is. Once the sum of query load + update frequency goes over server capacity, the server starts lagging. While throttling/banning and query expirations can mitigate the load issue to some extent, the server still has to process all the updates, so heavy update load can cause lags too. It is the sum of both factors. The servers right now can deal with the usual query load + update load, but spikes in either - or both - can be problematic if they are large enough. Smalyshev (WMF) (talk) 05:24, 30 April 2019 (UTC)
Wikidata's edit rate is pretty steady according to this chart - though there was a significant dip on April 25 that does coincide with a good day for lag - but other than that one day the whole chart doesn't seem clearly correlated with either edit rate or query rate or the combination, and there are mysterious time shifts, like from noon (peak query and close to peak edits most days) to 22:00 UTC (peak lag). Anyway, I'll stick with avoiding the 12:00-24:00 window for batch edits for now. ArthurPSmith (talk) 11:35, 30 April 2019 (UTC)

What is the plan?

We know about growth in Wikidata, and we know that future ambitions will not cease to be as ambitious as they are and were. As we are unable to service our current ambitions, what is the plan for the future? What growth is planned for, and what are the contingency plans? As I said earlier, Wikidata is not a relational database; what we experience is the consequence of the absence of relational mechanisms. There is a science to this - what are the plans for the future? How are we going to cope? PS: throw some iron at the problem. Thanks, GerardM (talk) 05:59, 2 May 2019 (UTC)

@GerardM: See this Phabricator ticket, which collects a series of requests for more hardware for WDQS, as you suggest. It's not a simple problem - scalability in the long run means having to abandon the "vertical" model (the entire graph on one server) and splitting it up among multiple servers, which is a complex technical problem and may require changing the underlying graph query software (currently Blazegraph). Meanwhile we need to work within the constraints we have right now. ArthurPSmith (talk) 11:38, 2 May 2019 (UTC)
It does not provide me with the answer I am looking for. That is technical; what I am looking for is the scenarios considered for growth, not the technology that is to follow. Thanks, GerardM (talk) 15:06, 2 May 2019 (UTC)
The issues are technical. If Wikidata is growing faster than the capacity of individual units of computer hardware, then we have to spread the pieces of Wikidata across multiple individual units, which requires significant development. If computer hardware capabilities are growing faster than Wikidata is, then we can just upgrade the hardware and be happy. It looks like we're in the first scenario, not the second. ArthurPSmith (talk) 15:10, 2 May 2019 (UTC)
Technical approaches get you a hack that "makes it work" for now. I am not interested in that; I am interested to learn whether exponential growth is expected and planned for, and whether we are considering "next generation" approaches that enable growth like 1000% in a year (when that is the growth considered plausible). Thanks, GerardM (talk) 05:58, 3 May 2019 (UTC)
The technical requirements - and the money to pay for them - are the limiting factor in any growth plan. Read the Phabricator tickets I referenced and you'll see Wikidata developers are asking for more capacity, and getting some pushback. Maybe you can spearhead an effort to give the developers more resources? ArthurPSmith (talk) 11:07, 3 May 2019 (UTC)

We had a good week, but today is bad

@Sjoerddebruin, Pintoch, Smalyshev (WMF): Grafana is showing the worst lag since last Monday today - and it's steadily going up. I stopped all my large-item jobs earlier today; however, this SourceMD batch from GerardM editing large items has been running for over 6 days now. I don't think there's any way to pause it that would allow it to restart? Magnus?? Is there any way to tell what else is happening this morning that may be causing trouble (or is it just Monday-morning heavy query volume)? ArthurPSmith (talk) 12:30, 6 May 2019 (UTC)

I don't think there is a way for admins to stop an individual batch, let alone enable later resumption. Blocking the user is the only thing I can help with, I am afraid. − Pintoch (talk) 12:39, 6 May 2019 (UTC)
Well, it looks like things are recovering. Maybe around noon UTC on Mondays will always be bad? ArthurPSmith (talk) 14:21, 6 May 2019 (UTC)
Jobs from SourceMD can be stopped using the UI, and they can be restarted at a later date. I have no problem with jobs being halted in this way when need be. I have been at work all day; I have stopped the job for now. Given that the job had run for a couple of days, with long periods where everything was smooth, you cannot say that it is this job on its own that is the problem. So what happened at noon that gave us such issues? Thanks, GerardM (talk) 16:20, 6 May 2019 (UTC)
I assume it's heavy query volume - I don't know if it's a small number of specific users or a more general problem of many people hitting WDQS at the same time. Stas mentioned that the problem seems to happen when query + update volume together go over some threshold; updates don't generally seem to cause trouble on their own. ArthurPSmith (talk) 20:08, 6 May 2019 (UTC)
And today, the following Monday, looks even worse - and all the large-item batch jobs were stopped over 2 hours ago. ArthurPSmith (talk) 12:16, 13 May 2019 (UTC)

click

Hi ArthurPSmith, thanks for setting Wikidata:Property_proposal/music_video to ready. Would you click "create"? I can then do the other steps. --- Jura 17:56, 30 April 2019 (UTC)

Go ahead, it's now music video (P6718). ArthurPSmith (talk) 20:40, 30 April 2019 (UTC)
@Jura1: Ok! ISO speed (P6789) and f-number (P6790) ArthurPSmith (talk) 18:58, 28 May 2019 (UTC)

Use of ISSN for DOI identifiers

Hi Arthur. I think we can use DOI identifiers for journals as well. There is a recommendation here. At least Wiley uses it widely. That's why I included it in Q6295227 and other items. Best regards. --Gerwoman (talk) 19:02, 7 May 2019 (UTC)

@Gerwoman: Hmm, ok, but in this case it looks like Journal of Forecasting (Q29011411) was created earlier (based on that DOI)? Perhaps the instance of (P31) there needs to be fixed and the two items merged? ArthurPSmith (talk) 20:28, 7 May 2019 (UTC)
Yes. Now merged. --Gerwoman (talk) 16:18, 8 May 2019 (UTC)
@Gerwoman: Ok, thanks! There may have been some others of yours that I removed DOIs from for the same reason - I'll be more careful checking for that sort of problem in future! This was based on looking at constraint violations on the DOI property. ArthurPSmith (talk) 17:44, 8 May 2019 (UTC)

Aren't chemical elements substances?

Hello Arthur, you reverted my attempt to make 'chemical element' a subclass of 'pure substance'. I'm new to Wikidata and want to understand; I hope this is the correct way to contact you. You stated: ""Chemical element" is not (only) a kind of substance." That may be right, but I didn't want it to be a substance only - I wanted it to be a substance too. My intuition is that chemicals like sodium or oxygen should somehow be chemical substances and not only abstract classes. Don't you agree? All ontologies I know classify substances like this:

matter
 mixture
  homogeneous mixture
  heterogeneous mixture
 pure substance
  compounds
  elements

See

https://chem.libretexts.org/Bookshelves/General_Chemistry/Map%3A_Chemistry_-_The_Central_Science_(Brown_et_al.)/01._Introduction%3A_Matter_and_Measurement/1.2%3A_Classification_of_Matter
https://www.slideshare.net/ewalenta/ch-2-classification-of-matter-ppt
https://eschool.iaspaper.net/classifications-of-matter/the-classification-of-matter/

Why shouldn't Wikidata do so? – The preceding unsigned comment was added by Micgra (talk • contribs) at 17:08, May 15, 2019 (UTC).

  • @Micgra: Wikidata's upper-level ontology is a bit of a mess; however, please don't change anything in it without discussion with members of the associated wikiproject - in this case Wikidata:WikiProject Chemistry. On this specific question, Wikidata already has the entry simple substance (Q2512777), which would take the spot you have for "elements" in the classification suggested above, and note that the two entries (chemical element (Q11344) and simple substance (Q2512777)) are linked via a "different from" relation here, which indicates we have considered the relation and, for the purposes of Wikidata, they are distinct. In particular, "chemical element" here represents both substances and individual atoms, whether they are in a pure substance or combined with other elements to form molecules or compound substances or mixtures etc. It is an overarching class - actually a metaclass, whose instances are the individual types of atoms that nature gives us. So yes, they are quite distinct in meaning here. ArthurPSmith (talk) 17:41, 15 May 2019 (UTC)

New duplicate DOIs

Hi, I just found a couple of new duplicate items with DOIs containing < (see Q63976771 and Q64357784). Both were created during the last two weeks by SourceMD. It looks like the problem is encoding in the DOIs, but I don't know why they are encoded. The DOI for each article appears to be correct in Crossref - could SourceMD be encoding the < and >? Simon Cobb (User:Sic19 ; talk page) 00:19, 6 June 2019 (UTC)

@Sic19: It sounds like that's what's happening; however, it's possible SourceMD is getting the DOIs from somewhere else (ORCID, PMC?) where the real problem is. I have been working on cleaning these up after the fact, so it's not a huge problem, but it's still annoying... ArthurPSmith (talk) 15:16, 6 June 2019 (UTC)

How's ORES working out for you?

Hi ArthurPSmith, I'm working with User:EpochFail (@halfak on IRC) on a research study looking into how mw:ORES is working out on wikis where it has been enabled. I was hoping to talk a little about the kind of work you do on Wikidata and about how the ORES edit filters and classifiers have been working out. Do you use any tools other than Special:RecentChanges or Special:Watchlist that take advantage of ORES? Do you know of any other tools used for patrolling that do not use ORES? I'm also interested in any other observations you may have about how the ORES scores are working out. Thank you! Groceryheist (talk) 23:52, 12 June 2019 (UTC)

@Groceryheist: You should probably visit Wikidata:WikiProject Counter-Vandalism, which lists some tools for counter-vandalism on Wikidata and people who are heavily involved in it. I've used the Open-ended Recent Changes tool [ORC] a bit. I don't pay a lot of attention to the ORES data; it doesn't seem very well calibrated for Wikidata, which has very different sorts of edits from the language Wikipedias. A lot of edits flagged by ORES here are just fine; they were flagged merely because an anonymous user did a bunch of work to fix up an item. On the other hand, the volume of edits here that need patrolling is pretty overwhelming, so we seem to miss a lot. It would definitely be helpful to have better tools for that. The multilinguality here makes it hard though. ArthurPSmith (talk) 13:09, 13 June 2019 (UTC)
Hi ArthurPSmith, thanks so much for getting back to me so soon! Your comments about anonymous users are particularly helpful. Do you have any other thoughts about how ORES treats anons? It's also interesting that you say ORES doesn't seem well calibrated for Wikidata. Also thank you for pointing me to the counter-vandalism project and to ORC! Finally, can you think of anyone else who might want to chat a little with me about quality control and ORES on Wikidata? Groceryheist (talk) 20:34, 13 June 2019 (UTC)
@Groceryheist: I think anons on Wikidata should probably be treated pretty much the same as any user requiring patrolling (fewer than 50 edits?); I'm not sure if ORES does something different. A lot of the Wikidata edits that ORES seems to flag, but which I think are fine, are creations of descriptions for items in a new language; ideally ORES would run the description through some kind of translation software to see whether the words match, to some degree, the existing descriptions in other languages. Of course we should flag vulgarities in any language, but it seems to flag a lot of perfectly innocent translation work. For example. On who to talk to - User:YMS, I think, is particularly knowledgeable about Wikidata vandalism. You might also want to talk to some of the admins who have to deal with vandals. ArthurPSmith (talk) 13:41, 14 June 2019 (UTC)
@ArthurPSmith: The feedback about the difficulty with translations is interesting and I'll pass it on to User:EpochFail. I'll also reach out to User:YMS as you suggest. Thanks for your help! Groceryheist (talk) 20:08, 16 June 2019 (UTC)
@Groceryheist, EpochFail: This discussion on Project Chat mentions several tools used for counter-vandalism and some other people involved in it here, so you might want to get hold of that group also. ArthurPSmith (talk) 14:30, 17 June 2019 (UTC)
@ArthurPSmith:, awesome! Thank you! Groceryheist (talk) 18:27, 9 July 2019 (UTC)

Property creation

Hi there!

Could you please create Wikidata:Property proposal/Réunionnais du monde ID?

Cheers, Nomen ad hoc (talk) 12:30, 15 June 2019 (UTC).

@Nomen ad hoc: - it takes me about 10 minutes to create a new property (unless I'm just asked to "click the button", which takes about 1 minute; if you are willing to fill in all the detailed attributes on the property after creation, that's a big help). Given there are 70+ properties waiting to be created, that's a lot of work to get them all done... when I get a free 10 minutes I'll take a look at yours next, though. ArthurPSmith (talk) 14:22, 17 June 2019 (UTC)
Ah, apologies, I didn't know that it takes so much time! Best regards, Nomen ad hoc (talk) 14:35, 17 June 2019 (UTC).

Property creation

I would like to suggest using a script to create properties automatically when the proposals have reached maturity. The script also reports some common issues back to the proposer. This page explains how to improve the property proposals. Regards, ZI Jony (Talk) 02:17, 30 June 2019 (UTC)

Just gonna say that if this one fails I'm throwing in the towel, and someone else will have to be the one proposing it again. Part of the failure rests entirely on me butting heads with user:pigsonthewing (not even on Wikidata!) in between the two attempts I've made at it. I am 100% convinced it is the only reason his support switched to the most misleading oppose I've seen in a long time. He has a storied past for grudges.

Either way, I do hope someone bringing it up has more luck persuading the people at property proposals than I did. Circeus (talk) 19:41, 4 July 2019 (UTC)

@Circeus: I wasn't aware of a previous attempt. I don't think it was linked from the current proposal? In any case, I don't understand exactly where you stand on this - would you support it with the URL datatype? Given the centrality of properties to data modeling in Wikidata, we do need to strive for consensus on how they are to be, and that means proposers need to be engaged in the discussion and try to be as clear as possible. ArthurPSmith (talk) 22:30, 4 July 2019 (UTC)
The first proposal is briefly mentioned (though not linked) in my answer to Andy. It's here, if you're curious (and should you desire to be really nosy, this is the incident I'm talking about). It was actually more focused (aside from the name), but apparently that made it even less attractive to the reviewers.
I'm not sure a URL version (aside from issues connected to the template moving for whatever reason) would allow the backlink from the template to the work. This is ultimately what the property is intended to provide (and is needed for eventually building automatic lists of works with templates, or of nomenclatural acts, as mentioned in the proposal), but doing the link in the other direction ("work that this template generates a reference for", template -> work, instead of the proposed "reference template for this work", work -> template) would definitely not be acceptable to the Wikidata users. You gotta pick your fights and all that. At this point, this is just not a fight I want to bother with anymore (especially if Andy, who has no actual interest in Wikispecies's content quality, is going to get in the way out of spite). Circeus (talk) 00:02, 5 July 2019 (UTC)
"Part of the failure rests entirely on me butting heads with user:pigsonthewing" False. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 17:12, 5 July 2019 (UTC)
"Andy, who has no actual interest in Wikispecies's content quality" Also false. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 17:13, 5 July 2019 (UTC)

Ready to click

Hi ArthurPSmith,

As you consider this and that ready, could you click "create"? Similarly another one. I can then complete the properties. --- Jura 13:20, 19 July 2019 (UTC)

@Jura1: - ok unabbreviated text (P7008), extracted from (P7009) and imprimatur granted by (P7010). ArthurPSmith (talk) 13:26, 19 July 2019 (UTC)

Community Insights Survey

RMaung (WMF) 17:37, 10 September 2019 (UTC)

Reminder: Community Insights Survey

RMaung (WMF) 19:53, 20 September 2019 (UTC)

Ships

Hi Arthur, can you give me any advice on the follow-through of this? I'm starting to think it's dying a death. Maybe I have the wrong end of the stick even? Regards Broichmore (talk) 15:20, 21 September 2019 (UTC)

If I understand correctly, you probably need to make some proposals at Wikidata:Property proposal. You might also want to consider starting a WikiProject (see Wikidata:WikiProjects) around ships, as there seem to be several interested parties. ArthurPSmith (talk) 19:34, 21 September 2019 (UTC)
@Broichmore: if you aren't following this page! ArthurPSmith (talk) 19:35, 21 September 2019 (UTC)
Thanks Arthur, I appreciate it. Sadly I don't know if there is enough interest for a WikiProject. There is a well-established page on Wikipedia, but I've struggled in vain for synergy with Commons. Hence my plea on Wikidata. Broichmore (talk) 12:00, 22 September 2019 (UTC)

Lexeme aaron

Hi there! Re: https://www.wikidata.org/wiki/Lexeme:L74566 I noticed you linked the male given name sense to the Hebrew-script אהרון rather than the English Aaron. I don't pretend to know how lexemes work, but I'm curious whether this was on purpose or perhaps an oversight? Moebeus (talk) 23:34, 4 October 2019 (UTC)

@Moebeus: That was a suggestion from the MachtSinn tool. If those two given names are really distinct concepts, then I think according to our data model it is appropriate to link them as different senses of the word "Aaron". Or maybe the two "given name" entries should be merged? I'm not terribly familiar with our name model actually, so I'm not sure of the best approach. ArthurPSmith (talk) 11:53, 7 October 2019 (UTC)
They should definitely not be merged, as the model we use (as championed by Project Names) stresses that names in different scripts and/or with different spellings should be kept apart as separate name items. One reason for this is that the English Romanization might differ from the German or French Romanization, for example. But anyway, I was just curious, and I don't really have an opinion on how the lexemes should be structured. Thx for the answer! Moebeus (talk) 12:10, 7 October 2019 (UTC)

New page for catalogues

Hi, I created a new page for collecting sites that could be added to Mix'n'match, and I plan to expand it, by category, with the ones that already have scrapers. Feel free to use and expand it. Best, Adam Harangozó (talk) 19:55, 19 October 2019 (UTC)

@Adam Harangozó: Thanks. I added a line for "UNESCO Nomenclature" - we should probably propose a property for this too. Did I add it in the area you would have expected? ArthurPSmith (talk) 14:00, 28 October 2019 (UTC)
Thanks! Yes, but feel free to decide on the categories as you will! --Adam Harangozó (talk) 16:11, 28 October 2019 (UTC)

Getting data from a property with OpenRefine

Hello! I have an OpenRefine-related question and maybe you can help me with this. I have a list of items and I can easily get a column with their INE municipality code (P772). Is it possible to get the Wikidata item that has this property in a new column? Thanks! -Theklan (talk) 19:00, 27 October 2019 (UTC)

@Theklan: (just replying because I happen to be around)
You can do as follows:
  • Create a new column; the values of the cells in that column should be some random garbage such as "2ebb3698dfff32bc6"
  • Reconcile this column to Wikidata, enabling the column which contains your INE municipality code (P772) values and matching it to… INE municipality code (P772)
Values which have a corresponding identifier will be matched. The purpose of using random garbage as the cell values in the column to reconcile is to make sure they are non-empty (otherwise they would be skipped by reconciliation) and that their content does not correspond to any item on Wikidata (otherwise the reconciliation could find items based on their label - we only want it to find items based on their INE municipality code (P772) value). − Pintoch (talk) 05:47, 28 October 2019 (UTC)
@Pintoch: Thanks! -Theklan (talk) 08:41, 28 October 2019 (UTC)
@Theklan, Pintoch: - uh, thanks for sorting it out, guys! I wouldn't have thought of the random-text trick (I would have advised just reconciling some other sensible column like "name"). ArthurPSmith (talk) 14:01, 28 October 2019 (UTC)
The problem is I didn't have a name column, so the random-text trick seems the best option. (By the way, I downloaded the file as a spreadsheet, made another page there with all the localities and INE codes, and used the VLOOKUP function to find the correspondences.) -Theklan (talk) 14:22, 28 October 2019 (UTC)
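A postscript for anyone repeating this: the same code-to-item mapping can also be pulled straight from the query service and joined in code, avoiding both the reconciliation trick and the spreadsheet VLOOKUP. A minimal Python sketch (the sample code value is illustrative):

    import requests

    QUERY = """
    SELECT ?code ?item WHERE {
      ?item wdt:P772 ?code .   # INE municipality code (P772)
    }
    """

    r = requests.get(
        "https://query.wikidata.org/sparql",
        params={"query": QUERY, "format": "json"},
        headers={"User-Agent": "ine-code-lookup-sketch/0.1 (example)"},
    )
    # Build a dict mapping each INE code to its Q-id.
    code_to_qid = {
        b["code"]["value"]: b["item"]["value"].rsplit("/", 1)[-1]
        for b in r.json()["results"]["bindings"]
    }
    print(code_to_qid.get("08019"))  # e.g. the item carrying INE code 08019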

Author disambiguation with EditGroups

Hi Arthur,

It was great to catch up at WikidataCon! I have added support for your Author Disambiguator tool in EditGroups. For the edits that should be tracked as batches on EditGroups, you can use edit summaries of the following form:

my very informative edit summary ([[:toollabs:editgroups/b/AD/89ead4fe|details]])

where 89ead4fe is a randomly generated hexadecimal string which identifies the batch (and "AD" stands for Author Disambiguator). This hash is generated by your tool: no interaction with EditGroups is required on your side. All edits with a summary matching this pattern will be attributed to your tool. See Wikidata:Edit_groups/Adding_a_tool for more details if needed. − Pintoch (talk) 05:40, 28 October 2019 (UTC)
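A minimal Python sketch of constructing such a summary (illustrative only - Author Disambiguator itself is PHP, and the function name here is hypothetical):

    import secrets

    def editgroups_summary(message, tool="AD"):
        # 8 random hex characters identify the batch; every edit in the
        # same batch must reuse the same token so EditGroups can group them.
        batch_id = secrets.token_hex(4)
        return f"{message} ([[:toollabs:editgroups/b/{tool}/{batch_id}|details]])"

    print(editgroups_summary("my very informative edit summary"))
    # e.g. my very informative edit summary ([[:toollabs:editgroups/b/AD/89ead4fe|details]])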

Thanks! I just added a GitHub issue to remind myself to get this done! ArthurPSmith (talk) 13:58, 28 October 2019 (UTC)
@Pintoch: I've been testing this out, and it doesn't seem to be working yet? Is there more of a delay than is stated? See for example this group, which includes this edit... the page just says "Edit group "5db8c3db1b8e7" not found"? ArthurPSmith (talk) 23:01, 29 October 2019 (UTC)
Whoops, sorry, that was a mixup on my side. This is now resolved: https://tools.wmflabs.org/editgroups/?tool=AD. A few thoughts:
  • Wouldn't it be nice to link the Qid in the summary? Author Disambiguator change author for [[Q33698593]] ([[:toollabs:editgroups/b/AD/5db8d7bc810d7|details]]) instead of Author Disambiguator change author for Q33698593 ([[:toollabs:editgroups/b/AD/5db8d7bc810d7|details]])
  • Your hashes do not seem random - is that intentional? (Is there no collision risk?)
Pintoch (talk) 08:36, 30 October 2019 (UTC)
Thanks, yes, it does work now! And oops on the hashes - they're not working quite as I intended (for one thing, there seems to be a different hash for each edit, so no grouping at all!). I have been using PHP's uniqid() function to generate the hash, but that's probably not the right thing to use. Time to revise a bit... ArthurPSmith (talk) 12:50, 30 October 2019 (UTC)
Ok, it is also worth noting that the first summary of the batch is used as the summary for the entire batch. So it might not be worth including the Qid of the item being worked on in the summary (given that it is already clear from the context in most UIs), but rather the items of the authors involved, perhaps? − Pintoch (talk) 14:42, 30 October 2019 (UTC)
Just to follow up - the above issues should be addressed now; see for example this page. ArthurPSmith (talk) 20:07, 30 October 2019 (UTC)

about the new features of Author Disambiguator

Hi,
Thank you for this tool. I have been using it for a couple of months now and it's great!
I think it's a good idea to move off QuickStatements, now that you have your own tag in the edit summary and do the job in only one edit. But there are some issues with these new features (at least for me).
So, first, when I'm logged in to my Wikimedia user account and I click on "Link selected works to author", it opens and runs a blank new tab in my browser. The tool is working (the edits are made on the items concerned), but I cannot see its progress, and after several minutes the "about:blank" tab usually ends in a 504 Gateway Timeout error. I think this is also slowing my browser (and|or) my computer significantly. I'm running Firefox 70.0 on Windows 10.
Second, I think the English label and item number of the author concerned should appear in the edit summary. As an example, something like "Author Disambiguator set author for Q59275792" isn't useful, because it is obvious that Q59275792, the edited item, is the one concerned. It should be something like "Author Disambiguator set author Daniel Muenstermann (Q64856332)".
I hope these comments will be useful, and thanks again for this tool! Simon Villeneuve (talk) 13:57, 30 October 2019 (UTC)

@Simon Villeneuve: Hi! Yes, the progress problem is a real issue; I'm working on a couple of fronts to address it, but it's a bit complicated... It shouldn't be slowing your computer much, though, unless you are editing papers with a very large number of authors where there's a lot of data to display - is that the case? There's almost no JavaScript involved; all the real work of matching and editing is being done on the server side (PHP). On the edit summary - good idea, I'll add a GitHub issue to work on that. ArthurPSmith (talk) 14:07, 30 October 2019 (UTC)
Hi,
Yes, that's the case (these damn particle physicists and their articles with > 2,000 authors). Simon Villeneuve (talk) 14:19, 30 October 2019 (UTC)
@Simon Villeneuve: By the way, the change to the edit summary is installed live. Also a link to "Edit Groups" (see the discussion with Pintoch above). ArthurPSmith (talk) 20:08, 30 October 2019 (UTC)
@Simon Villeneuve: There's now a new feature that moves the name-to-author transformations off to a background process, which you can follow in the browser via the new "Batches" page. However, it doesn't work quite as it did in testing; I still have a bit of tweaking to do. Ping me if you run into any trouble with it. ArthurPSmith (talk) 21:30, 12 November 2019 (UTC)
Hi,
Yes, I saw it. This is good!
My only problem for now is that when I finish a batch of 500, I reload the Author Disambiguator page with the same author name to do the rest of the items, but the page doesn't update correctly, even if I purge it. It takes some time before the page refreshes. For example, I finished my batch for Veronique Boisvert (Q67482673) hours ago and the Author Disambiguator page still shows 159 publications found. But those were already done (some 10 hours ago). Simon Villeneuve (talk) 21:38, 12 November 2019 (UTC)
This is unfortunately a problem with WDQS - it used to be current within a minute or so most of the time, but lately it is often several hours behind Wikidata updates. ArthurPSmith (talk) 21:44, 12 November 2019 (UTC)

Conservatorio Luca Marenzio (Q30263550)

Good evening ArthurPSmith,

I am contacting you about the page in the title (merely for informational purposes). I removed those items because I found multiple references/IDs (some outdated or incorrect) relating to one and the same element, so I proposed to unify them and maintain only one reference/ID. Here was my request: link. Thank you for your attention and for your work/contribution to the Wiki projects.

Best regards --BOSS.mattia (talk) 18:43, 27 November 2019 (UTC)

@BOSS.mattia: If you find something out of date or incorrect, this is a wiki - you are free to edit it and make changes. It's not helpful to create new items that duplicate existing ones! Also, if something becomes out of date, that doesn't mean it should be deleted - it was true at one point in time, so if necessary you can attach the dates (and any source references) to the information. ArthurPSmith (talk) 19:11, 27 November 2019 (UTC)
Actually, the situation was: there were multiple elements already existing before my intervention. I chose one, updated it, and then emptied the other pages/elements in order to have only one correct element/ID, to move the work forward and save time for other users/contributors. Thank you for your kind reply. Yours faithfully, --BOSS.mattia (talk) 19:22, 27 November 2019 (UTC)