Discussion of the mode of communicating

@Jura1: I'd like to discuss with you what I find difficult to handle in your line of questioning. First off, the goal here should be to improve the data in Wikidata so that it is more useful to users of Wikidata. Those are the terms on which I would like to discuss contributions. I appreciate that there are conventions, and I plan to follow them, but where there are massive inconsistencies and no references offered for the conventions, it's difficult to take them at face value.

As for the particular data, the list of historical information on senators is very incomplete. As I mentioned, I plan to make it complete; I would think that would be embraced as a good thing. I also appreciate that others have worked on this, and that while new additions are welcome, disrupting older work is frowned upon. Far from disrupting older work, mine will disambiguate statements that were previously hard to disambiguate.

I'm happy to discuss how, but your questions appeared to be an attempt to back me into a corner. If I've misunderstood, hopefully we can clear that up here.

Your "I have recently pinged you on that discussion" and "You asked me to open a discussion"

Hi, my memory sometimes fails me. Can you help me find what your comments refer to:

  • "You asked me to open a discussion" [1]

and

  • "I have recently pinged you on that discussion." [2]

Can you provide a link to these discussions?

Have you used a different user name in discussions about the topic? --- Jura 12:06, 15 August 2020 (UTC)

@Jura1: You can always search through my edits. Here is me adding you to a discussion I attempted to open a while ago: Wikidata_talk:WikiProject_every_politician/United_States_of_America. A simple acknowledgement that I am operating in good faith would go a long way. Also, why are we discussing your memory? Can we make this conversation more substantive? -- Gettinwikiwidit (talk) 12:15, 15 August 2020 (UTC)
@Jura1: It's getting late here and I'm getting tired of your attempts to wear me down. I'm happy to continue the discussion if it takes a turn towards something more productive. Otherwise I'll follow up with a complaint, since, as Oravrattas mentioned, this seems to be an issue that has been brought to your attention before and for which you have been disciplined. -- Gettinwikiwidit (talk) 12:26, 15 August 2020 (UTC)


You need to sign your comments for pings to work. This can't have worked.
When did I ask you to open a discussion? As you wrote those lines today, I'd assume you'd recall. Obviously, if you used another username, I couldn't. --- Jura 12:22, 15 August 2020 (UTC)
@Jura1: One last thing to note: if you take a step back, the addition of the new Q98077491 actually addresses the concern you had with Teester. You can keep United States senator (Q4416090) and use it as you would prefer. All your queries and bots would continue to work as they did before the per-congress information was entered. (Well before I ever knew about Wikidata, FWIW.) So it truly baffles me why you wouldn't want to discuss the practical aspects of this. Also, I've never made any entries to Wikidata under another name; I'm not sure why you keep bringing this up. You show a pattern of making busy work for other people which serves no purpose. That comes across as harassment. -- Gettinwikiwidit (talk) 12:26, 15 August 2020 (UTC)
@Jura1: Fair point on the ping. If that's all cleared up, can you explain why you asked? -- Gettinwikiwidit (talk) 12:28, 15 August 2020 (UTC)
This comes across as disingenuous. I'm trying to have a conversation and you're trying to dominate the discussion. As I mentioned at the top of this talk page, "the goal here should be to improve the data in Wikidata so that it is more useful to users of Wikidata". If you would orient your replies around this end, I think this would be much more productive. I do fear, though, that your immediate goal is not to discuss but to wear me down. If I've misunderstood, I apologize and look forward to more productive conversations. Again, it's late here, so bye for now. -- Gettinwikiwidit (talk) 12:38, 15 August 2020 (UTC)
@Jura1: I'm trying to engage with you. I made it clear I was done for the day, and yet you persist. It should be clear to anyone that this is harassment. Please stop. -- Gettinwikiwidit (talk) 12:46, 15 August 2020 (UTC)
@Jura1: Here is where you asked if I had opened a discussion. [4] Regardless, it should be clear that a discussion was opened a while ago. If you'd like to have a practical discussion of the plan going forward, you're more than welcome to engage. Otherwise, please desist.

@Jura1: You are willfully misconstruing and misunderstanding the points I've made. It seems designed as a wild-goose chase to tire me out. You simply refuse to acknowledge answers to questions you've received (including in the preceding discussion). You don't respond to specific questions about what concerns you have. This strikes me as harassment. You've had answers to all your questions, and because you've dragged the argument out so long, you now claim not to remember them. That's on you. You seem to have no real interest in the source material and haven't acknowledged the work I've done and pointed out to you. Why are you even part of a discussion of material you have no interest in? -- Gettinwikiwidit (talk) 06:38, 19 August 2020 (UTC)

@Jura1: Please try to read more carefully before commenting. I'm not interested in another fruitless spat on a public page. Gettinwikiwidit (talk) 13:24, 29 November 2020 (UTC)

Help:Sources

Please note the above about the way to add references to Wikidata items. --- Jura 12:09, 15 August 2020 (UTC)

@Jura1: Noted and responded to. [5]

We sent you an e-mail

Hello Gettinwikiwidit,

Really sorry for the inconvenience. This is a gentle note to request that you check your email. We sent you a message titled "The Community Insights survey is coming!". If you have questions, email surveys@wikimedia.org.

You can see my explanation here.

MediaWiki message delivery (talk) 18:46, 25 September 2020 (UTC)

Classes of Senators

@Gettinwikiwidit: Does the data you're working from distinguish between the Classes of United States senators? If so, this might be a good opportunity to set distinct electoral district (P768) values based on those. If that's not something that's simple to do at this stage, though, don't let this distract you — that's something we can always look at again later. --Oravrattas (talk) 13:29, 2 October 2020 (UTC)

@Oravrattas: It does, but I was debating which property to use to represent it. Any suggestions? I was thinking of adding it later, precisely because I don't have a clear opinion on this point.
@Gettinwikiwidit: In general, it's a good idea to have separate items for electoral constituencies and the geographic entities they map onto, even if those have exactly the same boundaries, as different properties will apply to each (especially over time); see the discussion at Wikidata talk:WikiProject every politician#Geographic areas as constituencies. So when migrating the US Senators to distinct constituency items for each state came up before, the suggestion was to have two separate constituencies per state, e.g. Delaware 1 and Delaware 2, as each is distinct (of a different "class", on a different electoral cycle, etc.). But certainly no problem with switching to this later if it's awkward to include now, or if it would slow down the main process. --Oravrattas (talk) 07:09, 3 October 2020 (UTC)
@Oravrattas: Okay, I think I'm ready to upload this stuff. I wanted to run a test or two first. It looks like when I upload statements with OpenRefine, it makes new statements rather than replacing old ones. I have captured the current statement IDs with the following:
SELECT DISTINCT ?senator ?senatorLabel ?termLabel ?partyLabel ?districtLabel ?posheld WHERE {
  ?senator p:P39 ?posheld;
    wdt:P31 wd:Q5.
  ?posheld ps:P39/wdt:P279* wd:Q4416090;
    pq:P2937 ?term;
    pq:P580 ?assumedOffice.
  OPTIONAL { ?posheld pq:P768 ?district;
           pq:P4100 ?party.}
  SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en". }
}
ORDER BY (?senatorLabel)

This should allow us to delete these statements with QuickStatements afterwards. (I haven't tested this yet.) My plan is to first upload all the new statements, then save off the information from the old statements to be sure nothing of interest is lost, and then delete the old statements. It may take me some time to extract the current statements with all their qualifiers, to be safe. How does this sound? I'll wait for confirmation. Gettinwikiwidit (talk) 10:31, 11 October 2020 (UTC)
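(The deletion step need not go through QuickStatements; here is a minimal pywikibot sketch of it, assuming the statement GUIDs come from the ?posheld values captured above. Note that WDQS statement URIs use "Q42-..." where pywikibot claim GUIDs use "Q42$...".)

# Sketch of the deletion step using pywikibot instead of QuickStatements.
# The GUIDs come from the ?posheld statement nodes returned by the query
# above, after swapping the "-" following the item ID for a "$".
import pywikibot

site = pywikibot.Site("wikidata", "wikidata")
repo = site.data_repository()

def remove_p39_claims(qid, guids):
    """Remove the P39 claims on item `qid` whose statement GUIDs are in `guids`."""
    item = pywikibot.ItemPage(repo, qid)
    item.get()
    doomed = [c for c in item.claims.get("P39", []) if c.snak in guids]
    if doomed:
        item.removeClaims(doomed, summary="Remove superseded P39 statements")

# e.g. remove_p39_claims("Q42", {"Q42$F078E5B3-F9A8-480E-B7AC-D97778CBBEF9"})
# (item ID and GUID are purely illustrative)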

Please stop uploading

As requested by the admin reviewing the request for deletion of your malformed items, please don't use them while we are discussing the model to use. --- Jura 09:21, 14 October 2020 (UTC)

@Jura1: I saw no such request. The conversation under John McCain appeared to me to be over.

Wikidata:Requests_for_deletions/Archive/2020/10/04#Bulk_deletion_request. --- Jura 09:27, 14 October 2020 (UTC)

Please undo your non-consensual edits. --- Jura 09:28, 14 October 2020 (UTC)

Who is the admin? MisterSynergy? He appears to agree with the consensus. Also, have you looked at the changes? They don't use Q98077491-style positions; they use the form you agreed to in the discussion. Gettinwikiwidit (talk) 09:30, 14 October 2020 (UTC)

Please clean up your edits until we agree on an approach to use. --- Jura 09:35, 14 October 2020 (UTC)

"I don't think that you need to ask for permission if you want to complete the missing claims with the currently used data model (using United States senator (Q4416090) with parliamentary term (P2937) qualifiers)." https://www.wikidata.org/w/index.php?title=Wikidata:Project_chat&diff=1290266977&oldid=1290255661 Gettinwikiwidit (talk) 09:38, 14 October 2020 (UTC)
@MisterSynergy: Please correct me if I've misunderstood.
Rats! I apologize. I meant to do this, but I messed up my schema. I'm happy to change all the statements to United States senator (Q4416090). What's the best way to achieve this? Is it to delete and re-add? I don't think updates are possible with QuickStatements, but I am adept at Python. Gettinwikiwidit (talk) 10:11, 14 October 2020 (UTC)
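(If pywikibot is an option, an in-place update is in fact possible without deleting and re-adding, since Claim.changeTarget rewrites a claim's main value while keeping its qualifiers and references. A minimal sketch, with illustrative item IDs:)

# Sketch: repoint existing P39 claims at United States senator (Q4416090)
# in place; changeTarget preserves the claim's qualifiers and references.
import pywikibot

site = pywikibot.Site("wikidata", "wikidata")
repo = site.data_repository()

senator = pywikibot.ItemPage(repo, "Q4416090")
item = pywikibot.ItemPage(repo, "Q42")  # illustrative senator item
item.get()
for claim in item.claims.get("P39", []):
    target = claim.getTarget()
    if target and target.id == "Q98077491":  # one of the per-congress items
        claim.changeTarget(senator, summary="Use Q4416090 per discussion")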
Actually, it looks like it only updated the first row I had per senator. I think I understand how I need to fix my OpenRefine project to do the right thing. (I thought the update would know about records, but it seems it prefers rows.) I'll roll back my update and fix accordingly. Gettinwikiwidit (talk) 11:30, 14 October 2020 (UTC)

Welcome

Hello & welcome! I see you've already had your run-in with Jura, which seems to be a traditional rite of passage at this point.

Speaking of tradition: another one is to start each post on a user's talk page with praise, and then awkwardly segue into criticism, which is exactly where we now are at this point of this sentence. It is the mildest of possible criticisms, though. Almost more of a question.

Which is: I'm wondering whether it's better to use your bulk data source as references (https://bioguideretro.congress.gov/Static_Files/data/B/B000444.xml) or the associated HTML page (https://bioguideretro.congress.gov/Home/MemberDetails?memIndex=B000444). I think it's generally better to optimize for human readers who might just click it, and any non-human readers will have to contend with HTML anyway, since it constitutes the vast majority of linked sources.

Another alternative would be to add further qualifiers. publisher (P123) = United States Senate (Q66096) would be an obvious one, and for our purpose external data available at URL (P1325) could be used for the XML source. Wikidata:List_of_properties/Citing_sources is a nice list to peruse.

I said "question" above, and I could indeed see it either way; there is absolutely no need to change the references you have already added. It's more an issue I had on my radar, and wish to put on more radars, and maybe at some point I'll find someone with a good argument either way and then we'll make it the law.

Cheers & happy importing, --Matthias Winkelmann (talk) 15:22, 14 October 2020 (UTC)

@Matthias Winkelmann: I'm pretty agnostic as to how best to use the reference, but am happy to go with your instinct. As to Jura, I think it's to the detriment of Wikidata to let him present himself as the face of the project. Quite frankly, I am discouraged, but I simply refuse to be bullied. Gettinwikiwidit (talk) 18:44, 14 October 2020 (UTC)
@Matthias Winkelmann: I followed your suggestion for the update. Gettinwikiwidit (talk) 22:08, 14 October 2020 (UTC)
Great, thanks! If you're looking for something a bit more fun, I recently came across "The Murder Accountability Project". It's run by a retired guy with a knack for statistics who has amassed the most comprehensive database of homicides in the US. Not sure where I read about him (The Atlantic, maybe?), but he has an algorithm that tries (and succeeds) to connect unsolved cases with each other and identify serial murders. It would probably take a bit of modelling work and a few added properties, but it just wants to be connected to the larger data universe. Just an idea, though; I'm a bit busy, but I have been meaning to work on it or do a proper write-up to get others interested.
Thanks for the tip! Gettinwikiwidit (talk) 22:23, 15 October 2020 (UTC)
As to Jura, don't despair! I got pretty pissed when he was making some bad-faith accusations in public with complete disregard for the explanation I had already given him. But he is also correct sometimes, and has even been seen behaving reasonably, according to rumors. It's volunteer work, and people are probably far more diverse than at a regular office (except for gender balance, obviously). --Matthias Winkelmann (talk) 21:29, 15 October 2020 (UTC)

This relationship is not productive

@Jura1: Please simply stop. Let's not have you using language like "malformed" without any reference to a consensus. Also, don't claim to "clean up" data without a consensus either. One thing is clear: you've put more effort into trying to take control of the conversation than into doing anything with the data here. I will keep my focus on improving the data store. You can expect that I will ignore further comments from you that I don't find useful in that pursuit. Gettinwikiwidit (talk) 13:16, 6 November 2020 (UTC)

@Jura1: You seem to have a fundamental misunderstanding of how modeling works. It's all fictional; the real thing is the real thing. The model needs only to map onto aspects of the real thing in a way that's useful. Gettinwikiwidit (talk) 13:16, 6 November 2020 (UTC)
@Jura1: It's odd, because I can see you making useful contributions elsewhere. I'm happy to have a productive conversation, but you seem more intent on preventing any work from getting done on my project. Gettinwikiwidit (talk) 09:16, 8 November 2020 (UTC)

Please stop changes to electoral districts

Hi Gettinwikiwidit

Please be reminded of @MisterSynergy:'s injunction to seek broader input before making large-scale changes to our data.

For such input, please provide a clear sample that others can review and comment on.

In the meantime, please undo any changes you may have made against admin advice and without such community input. --- Jura 08:50, 14 November 2020 (UTC)

As you are well aware, there was an active discussion on Project chat which included samples. Please stop misrepresenting facts to others. I'm not sure what admin advice you're referring to. Regards, Gettinwikiwidit (talk) 09:55, 14 November 2020 (UTC)
I'll let MisterSynergy help you with that.
Is there any reason why you don't want to stop the upload and seek wider community input? --- Jura 12:24, 14 November 2020 (UTC)

Import mismatches

Spotted one probable mismatch in your import: William Sprague III (Q883991) seems to have data for William Sprague IV (Q437963) (six terms taking place after his death), and vice versa. It should be easy to fix, but I thought I'd leave it with you to change over, since you probably have the script handy and it'll take you a lot less time than me doing it by hand! Andrew Gray (talk) 13:10, 15 November 2020 (UTC)

Thanks. It looks like I got these exactly backwards somehow. If you follow the references, you can see this is the case. I'll flip them. FWIW, the entries I added all have parliamentary term (P2937); the others should probably be cleaned up. Regards,

Gettinwikiwidit (talk) 22:57, 15 November 2020 (UTC)

Fixed. FWIW, I just grabbed the statements with SPARQL like so:
SELECT DISTINCT (wd:Q883991 as ?uncle) (wd:Q437963 as ?nephew) ?stmt ?termLabel WHERE {
  wd:Q437963 p:P39 ?stmt.
  ?stmt ps:P39 wd:Q4416090;
        pq:P2937 ?term.
  SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en". }
}
ORDER BY ?termLabel
and then generated move-claim commands with a one-liner: perl -lne 's/http:\S*\///g; print if $.>1' sprague-uncle.tsv | cut -f2,3 | while read uncle statement; do echo $statement $uncle P39; done | perl -lpe 's/-/\$/' | wd mc --batch. The SPARQL obviously generates different results now that the claims have been moved. Thanks again, Gettinwikiwidit (talk) 23:27, 15 November 2020 (UTC)
Some more anomalies - this lot are interesting. It seems that before about 1920, most senators are shown in the source data as serving until the end of the Congress, even if they died during it; after that, the data mostly standardizes on ending terms at death. There are also a lot where the source data has the end of the term set a short time after death - anything from a day up to a month or so. My feeling is it would make sense to end all of these on the date of death, but I'm not sure exactly what the source is indicating here - any ideas? Andrew Gray (talk) 13:58, 15 November 2020 (UTC)
@Andrew Gray: Thanks. The data still needs some cleaning up. I started out having each senator serve out the full length of the term; then I adjusted for senators who were appointed mid-term. The source I'm using is in the reference data, and it is also not consistent. It's also not consistent about whether appointed senators start as of the date they are appointed or the first date they take a seat. I'm happy to follow any consistent standard. Using date of death should be easy to implement. It will likely mean there are periods in which there is no sitting senator, but that's fine; I think there have actually been periods where that's the case. Gettinwikiwidit (talk) 22:54, 15 November 2020 (UTC)
This also suggested another check, which turned up some anomalies:
SELECT DISTINCT ?sen ?bioid (SUBSTR(STR(?val), ((STRLEN(STR(?val))) - (STRLEN(?bioid))) + 1 ) as ?urlid) WHERE {
  ?sen p:P39 ?stmt;
    wdt:P1157 ?bioid.
  ?stmt ps:P39 wd:Q4416090;
    pq:P2937 ?term;
    prov:wasDerivedFrom ?ref.
  ?ref pr:P854 ?val.
  FILTER((SUBSTR(STR(?val), ((STRLEN(STR(?val))) - (STRLEN(?bioid))) + 1 )) != ?bioid )
  SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en". }
}
It's also worth noting that having this data in the store makes it easier to do some of these checks. Gettinwikiwidit (talk) 01:21, 16 November 2020 (UTC)
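(The same cross-check is easy to script outside the query service too; a rough sketch against the public SPARQL endpoint, checking that each reference URL ends with the item's Bioguide ID:)

# Sketch: flag P39 reference URLs that don't end in the senator's
# Bioguide ID (P1157), mirroring the SPARQL check above.
import requests

QUERY = """
SELECT ?sen ?bioid ?val WHERE {
  ?sen p:P39 ?stmt;
    wdt:P1157 ?bioid.
  ?stmt ps:P39 wd:Q4416090;
    prov:wasDerivedFrom ?ref.
  ?ref pr:P854 ?val.
}
"""

resp = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "bioguide-url-check/0.1 (example)"},
)
for row in resp.json()["results"]["bindings"]:
    url, bioid = row["val"]["value"], row["bioid"]["value"]
    if not url.endswith(bioid):
        print(row["sen"]["value"], bioid, url)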
Definitely agree - getting stuff into a queryable form really helps show up some of the anomalies that might not otherwise be obvious from a plain table of data, and then they're a pretty quick fix (hopefully). You might find some of the stuff at Wikidata:WikiProject British Politicians/backend#Maintenance queries useful, with tweaks (though it's not as well documented as it could be).
Thinking about dates... for the Senate, my understanding is that everyone is usually elected in November but nobody takes their seats until January (or March), and I think you've reflected this with "normal" terms starting and ending on the changeover day in January/March? So, from that, it would seem logical and consistent for mid-term replacements to start on the day that they take their seat, rather than the day of election or the day of the replacement being announced.
For end dates, if terms end at death, you'd probably want resignations to take effect on the date of resignation (however that's defined - there does seem to be an official effective-of date) rather than carrying forward to the date of a replacement being appointed. This will mean the two cases are consistent with each other. There will be gaps, but that seems to make sense, because there certainly are points in the real world with only, say, 99 senators, simply because it takes time to appoint someone. Andrew Gray (talk) 21:00, 16 November 2020 (UTC)
@Andrew Gray: I think we should move this conversation over to Wikidata_talk:WikiProject_every_politician/United_States_of_America, in the hope that it's more inviting for people to participate. We can list what the state of affairs is and what sorts of clean-up we think need to be done. I'll put down my thoughts on where we are there shortly. Gettinwikiwidit (talk) 02:08, 17 November 2020 (UTC)
Sounds great - I'm busy for a couple of days but will get some notes there by the end of the week. Andrew Gray (talk) 22:51, 18 November 2020 (UTC)
@Andrew Gray: FWIW, I put quite a number of my thoughts on the Wikidata_talk:WikiProject_every_politician/United_States_of_America page, as well as some observations that might spark an interest in someone should I not get around to investigating further. I've also completed a couple of the tasks I posted and noted as much on the page. Regards, Gettinwikiwidit (talk) 01:48, 27 November 2020 (UTC)
Oh, sorry, I thought I'd posted some follow-ups there! Must be getting forgetful. Will dig out my notes and add. Andrew Gray (talk) 22:52, 27 November 2020 (UTC)
@Andrew Gray: I used your query to delete statements. I'm not sure where you are with the remaining ones, but this query from the examples page still shows some old-style districts. Would you mind letting me know what more there is you'd like to check, and when you think we might finish this off? Thanks, Gettinwikiwidit (talk) 11:26, 11 December 2020 (UTC)
P.S. Browsing through a few pages, it looks a lot cleaner with the unified model in there. Gettinwikiwidit (talk) 11:26, 11 December 2020 (UTC)
Good question - those two look like they should have been in the deletion batch (they're showing up when I run the original query). Maybe your script skipped them for some reason, and they just need finishing off manually? Andrew Gray (talk) 18:04, 11 December 2020 (UTC)

Making this discussion productive

@Tagishsimon: Let's not have this out on a public forum. I'm happy to engage with you here, where you can make all the accusations you want and I'll answer them. Regards, Gettinwikiwidit (talk) 10:36, 30 December 2020 (UTC)

To kick things off, I'll simply ask why you think claims with no clear reference besides Wikipedia are more authoritative than the USGS. It's not clear to me from your comments that you even considered what source I was referencing. It's also not clear what value you think such unreferenced claims add beyond "it was there", which doesn't show much evidence of having thought about the material at all. My focus will always be on the value that can be taken from this data. If you can explain the value of keeping unsourced information around, and even potentially giving it a higher rank than clearly sourced information, I'm happy to engage in that discussion. Regards, Gettinwikiwidit (talk) 11:02, 30 December 2020 (UTC)
I'll go a step further. coordinate location (P625) is used to locate an item on a map. These bodies of water clearly cover an area, and so in some sense the choice of coordinate used to achieve this goal is arbitrary. One point offers not much value over another, provided they both fall within the area being pointed at. Why choose one point over another in this case? For one, it makes it a lot easier to cross-reference against the data if there is a "canonical source" for this choice, and I think the USGS provides a clear canonical source for such information about these entities. There is no sense in which Wikipedia-referenced items can serve as a canonical source. If you've thought about this more than I have and have something to offer, I'm open to discussion, but I will not have the basis of such a discussion be your assertion that I have given no thought to this whatsoever. In an ideal world, you'd apologize for your false accusations. But this is the internet, so I hold out no such hope and thus will make no such request. Regards, Gettinwikiwidit (talk) 11:02, 30 December 2020 (UTC)
@Tagishsimon: Rereading what you'd written, I can appreciate your intent better. I'll admit that our previous interaction, in which, in my view, you piled on to Jura's baseless claims, tainted my view of what you'd written. I don't have a problem with the suggestion that mass uploads don't necessarily improve the data, but you also didn't seem to try to understand my point about having clearly referenced data. We may simply disagree about the value of unreferenced data. I'm not interested in pushing my preferences down other people's throats, though, as should be evident from the fact that I posted before making the change. In any event, I do apologize for not reading your original comments more carefully. Regards, Gettinwikiwidit (talk) 12:24, 31 December 2020 (UTC)

[WMF Board of Trustees - Call for feedback: Community Board seats] Meetings with the Wikidata community

The Wikimedia Foundation Board of Trustees is organizing a call for feedback about community selection processes between February 1 and March 14. While the Wikimedia Foundation and the movement have grown about five times over in the past ten years, the Board's structure and processes have remained basically the same. As the Board is designed today, we have a problem of capacity, performance, and lack of representation of the movement's diversity. Our current processes to select individual volunteer and affiliate seats have some limitations. Direct elections tend to favor candidates from the leading language communities, regardless of how relevant their skills and experience might be in serving as a Board member, or in contributing to the ability of the Board to perform its specific responsibilities. It is also a fact that the current processes have favored volunteers from North America and Western Europe. In the upcoming months, we need to renew three community seats and appoint three more community members to the new seats. This call for feedback is to see what processes we can all collaboratively design to promote and choose candidates who represent our movement and are prepared, with the experience, skills, and insight, to perform as trustees.

In this regard, two rounds of feedback meetings are being hosted to collect feedback from the Wikidata community. The two rounds have the same agenda, to accommodate people from various time zones across the globe. We will be discussing ideas proposed by the Board and the community to address the problems mentioned above. Please sign up for whichever round is most comfortable for you. You are welcome to participate in both as well!

Also, please share this with other volunteers who might be interested. Let me know if you have any questions. KCVelaga (WMF), 14:32, 21 February 2021 (UTC)

OpenRefine and spatial indexing

Hi,

I saw your very interesting post on Andrew Gray's talk page about flexible OR reconciliation run against a local CSV extract from Wikidata.

Somebody was asking this morning on the #wikimaps Telegram channel whether there are any good ways to do spatial reconciliation -- i.e. reconciliation based on coordinate nearness first, to get a list of candidate matches, before string similarity.

I was wondering,

i) do you know if this is possible in OpenRefine, and if so, whether there are established best ways to go about it?
ii) is spatial indexing something that could be built into your offline reconciliation service?

e.g. in Perl I can use https://metacpan.org/pod/Algorithm::SpatialIndex to easily get back the 10 nearest coordinate matches out of a reference set, for each data row I want to match to Wikidata. I can then score or filter these candidates by string similarity, or just look at the set of 10 if there is no single clear match.

Similarly in Python it looks like https://geopandas.org/docs/reference/sindex.html based on an underlying RTree library is available.

Would this be functionality that could be built into csv-reconcile? It would be brilliant to be able to match on geo-proximity, and Wikidata class, before string similarity. Jheald (talk) 12:11, 14 April 2021 (UTC)

Hey @Jheald:, currently csv-reconcile only matches against a single column, but the scoring of the fuzzy match is handled by a plugin, and there is a plugin which uses distance for scoring. There are plans to add optional checking of properties per the reconciliation spec, but the hope is to keep the core service as flexible as possible and have the complexity handled through configuration and plugins.
As for whether it's currently possible in OpenRefine, this is more a question about the Wikidata reconciliation service itself than about OpenRefine. There is an issue open for that service. I personally think there's a lot more potential for fine-grained reconciliation using personalized services than in adding lots of configuration to a large service. If you have questions about csv-reconcile, shoot a question over on the project page. Regards, Gettinwikiwidit (talk) 23:44, 14 April 2021 (UTC)
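(To give a flavor of what "uses distance for scoring" means in practice, though this is a generic sketch rather than the plugin's actual code: a scorer only has to map the distance between two coordinate pairs onto the 0-100 range the reconciliation API expects.)

# Generic sketch of a distance-based scorer (not csv-reconcile's actual
# plugin code): haversine distance mapped to a 0-100 score.
from math import asin, cos, radians, sin, sqrt

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def score(a, b, cutoff_km=50.0):
    """100 at zero distance, decaying linearly to 0 at cutoff_km or beyond."""
    return max(0.0, 100.0 * (1 - haversine_km(a, b) / cutoff_km))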
Thanks. That's interesting. The key point here, though, is that IMO this is not fundamentally an issue about scoring; that's too late. Rather, it's about how to get back the initial set of candidate matches for a target that then get fed to the scoring module for evaluation.
And it seems to me that csv-reconcile is probably the ideal controlled small environment to experiment with approaches to this. It may actually be the most useful place for it, but it would also establish proof of concept, and clarify the exact nature of the requirements and a potential design to request for larger systems.
To illustrate the kind of thing I now routinely do in Perl, but others would like to be able to do without scripting, e.g. in OpenRefine:
  • the size of the 'haystack' might be between 30,000 items (UK settlements) and 500,000 (UK places with a heritage designation)
  • the number of 'needles' might be 10,000 or several tens of thousands.
The scale of the reconciliation means it makes sense to run against a local extract (which can be much faster), rather than the (rather slow) full Wikidata reconciliation service.
  • It's hopeless to expect to run a scoring algorithm over all 10,000 * 500,000 potential matches.
  • A strings-first approach, using string similarity to identify the subset of candidates to then score, works poorly and misses too many potential matches, because the strings used on both sides can show too much variation.
  • What works is a coordinates-first approach, using coordinate nearness to identify e.g. the nearest 20 candidates from the 'haystack' for each needle, and then scoring those 20 candidates.
I think this kind of approach should be possible within the OpenRefine client-server model (even if I don't know enough about the reconciliation protocol to know exactly what messages one would want to send). But one requirement for it would seem clear, that the reconciliation service would need a spatial index of the haystack, to be able to return the n nearby initial candidates to then score.
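(A rough sketch of that coordinates-first pass, assuming Python with scipy; the data shapes are illustrative. A KD-tree on raw lat/lon treats degrees as Euclidean, which is only approximate, but that's fine for generating candidates:)

# Sketch: for each needle, pull the k nearest haystack candidates by
# coordinate, then rank just those candidates by string similarity.
from difflib import SequenceMatcher
from scipy.spatial import cKDTree

def reconcile(needles, haystack, k=20):
    """needles/haystack: lists of (label, lat, lon); shapes illustrative."""
    tree = cKDTree([(lat, lon) for _, lat, lon in haystack])
    matches = []
    for label, lat, lon in needles:
        _, idx = tree.query((lat, lon), k=k)
        candidates = [haystack[i] for i in idx]
        ranked = sorted(
            candidates,
            key=lambda c: SequenceMatcher(None, label, c[0]).ratio(),
            reverse=True,
        )
        matches.append((label, ranked))
    return matches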
So that's partly why I'm reaching out to you, as someone who has actually built a local OR reconciliation service. To implement the above, what would you see as the 'asks' that a client would send to a reconciliation service, i.e. the requirements that the reconciliation service would need to be able to fulfil? Jheald (talk) 10:44, 15 April 2021 (UTC)
@Jheald: It sounds like you're asking what the reconciliation API is. You can find that here. FWIW, when I speak of scoring, I mean simply the method of doing the fuzzy match. Since there is primarily a single comparison taking place, there is no question of what you do before or after getting these results; you apply it to what you want when you want, before or after doing a string filter. This is a single tool, not a silver bullet. Also, I want to reiterate that you'll always have more flexibility to tailor a SPARQL query than to tweak knobs on a given reconciliation service. Gettinwikiwidit (talk) 22:37, 15 April 2021 (UTC)

Jupyter notebooks 101 at the Wikimedia hackathon

Hi, I wonder if you would be interested in thinking together about how to create a shared Jupyter notebook for doing reconciliation? I am not a coder, but I have tried very many strategies and tools for doing reconciliation. I am hopeful that Jupyter notebooks could be used to leverage the capacity of the many Python tools created for different purposes, also for non-coders. We initiated a workshop on learning the basics of notebooks, and I would like to apply the takeaways to reconciliation myself. Susanna Ånäs (Susannaanas) (talk) 07:20, 13 May 2021 (UTC)

@Susannaanas: Hi there! I don't have much experience with Jupyter notebooks, but I'm happy to brainstorm ideas for how reconciliation could be incorporated into some workflow. I have briefly looked at notebooks, but didn't have a project to keep my attention at the time. FWIW, I'm based in Tokyo. How do you suggest we proceed? Regards, Gettinwikiwidit (talk) 23:11, 13 May 2021 (UTC)
Great! I made a ticket for the reconciliation Jupyter initiative on the hackathon board in Phabricator. That might be a good place to start. We have also started discussing in the https://t.me/wmhack Telegram group. Susanna Ånäs (Susannaanas) (talk) 14:32, 14 May 2021 (UTC)
@Susannaanas: Thanks. I'll try to catch up. I've joined the Telegram group as Doug. Gettinwikiwidit (talk) 21:21, 14 May 2021 (UTC)
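(As a possible seed for such a notebook: a reconciliation call is a single HTTP round-trip, so a minimal cell might look like the following. The endpoint URL is just an example; any service implementing the reconciliation API, csv-reconcile included, should answer in the same shape.)

# Sketch of one reconciliation round-trip as a notebook cell. The
# endpoint is an example public service; the request/response shape
# follows the reconciliation API spec.
import json
import requests

ENDPOINT = "https://wikidata.reconci.link/en/api"  # example endpoint
queries = {"q0": {"query": "Douglas Adams", "type": "Q5", "limit": 3}}

resp = requests.post(ENDPOINT, data={"queries": json.dumps(queries)})
for key, batch in resp.json().items():
    for cand in batch["result"]:
        print(key, cand["id"], cand["name"], cand["score"], cand["match"])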

Call for participation in the interview study with Wikidata editors

Dear Gettinwikiwidit,

I hope you are doing well.

I am Kholoud, a researcher at King's College London, and I work on a project, as part of my PhD research, that develops a personalized recommendation system to suggest Wikidata items to editors based on their interests and preferences. I am collaborating on this project with Elena Simperl and Miaojing Shi. I would love to talk with you to learn about the ways you currently choose the items you work on in Wikidata, and to understand the factors that might influence such a decision. Your cooperation will give us valuable insights into building a recommender system that can help improve your editing experience.

Participation is completely voluntary, and you have the option to withdraw at any time. Your data will be processed under the terms of UK data protection law (including the UK General Data Protection Regulation (UK GDPR) and the Data Protection Act 2018). The information and data that you provide will remain confidential; it will only be stored on the password-protected computers of the researchers. We will use the anonymized results to provide insights into the practices of editors in the item-selection process, and publish the results of the study at a research venue. If you decide to take part, we will ask you to sign a consent form, and you will be given a copy of this consent form to keep.

If you're interested in participating and have 15-20 minutes to chat (I promise to keep to the time!), please either contact me at kholoudsaa@gmail.com or use this form with your choice of the times that work for you: https://docs.google.com/forms/d/e/1FAIpQLSdmmFHaiB20nK14wrQJgfrA18PtmdagyeRib3xGtvzkdn3Lgw/viewform?usp=sf_link. I'll follow up with you to figure out the best way for us to connect. Please contact me using the email mentioned above if you have any questions or require more information about this project.

Thank you for reading this information sheet and for considering taking part in this research.

Regards,
Kholoud