Please clean up html escaped stuff in labels

https://www.wikidata.org/w/index.php?title=Q58310658&action=history this has a label full of html escaped stuff. see also https://www.wikidata.org/w/index.php?search=quot&search=quot --So9q (talk) 07:22, 3 March 2021 (UTC)

@So9q: Thank you for bringing it to my attention. As I presume you’ve noticed, too often my imports don’t work well with special characters. I do my best to correct them when I see them. I should do better. Trilotat (talk) 14:04, 3 March 2021 (UTC)
@So9q: Can you tell me which is the Wikidata-standard quotation mark to replace the HTML found in that second link (the search results) you shared, " or “ ? I make no promises to make a dent in that list, but I’ll tend to it when I can. Trilotat (talk) 14:13, 3 March 2021 (UTC)
This looks like a bot job to me to be honest. It would be nice to know if the obviously flawed tool has been fixed. Can you investigate? --So9q (talk) 15:34, 3 March 2021 (UTC)
I'm clueless about such things. I'm happy to clean up but even describing the problem beyond the obvious is more than I can do. I've seen discussions about it in various projects, so I know it's a known issue. Trilotat (talk) 15:46, 3 March 2021 (UTC)
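For what it's worth, the bulk cleanup discussed above could be scripted. A minimal sketch of the unescaping step in Python (the sample labels are invented, and a real bot would also need to fetch and write labels through the Wikidata API):

```python
import html

def clean_label(label):
    """Repeatedly unescape HTML entities (handles double-escaping like &amp;quot;)."""
    previous = None
    while label != previous:
        previous = label
        label = html.unescape(label)
    return label

# Invented examples of the kind of labels turned up by the search linked above
labels = [
    'The &quot;missing&quot; satellites problem',
    'Ca H &amp;amp; K line spectroscopy',
    'A label that is already clean',
]

for label in labels:
    fixed = clean_label(label)
    if fixed != label:
        print('{!r} -> {!r}'.format(label, fixed))
```

The loop matters because some labels are escaped more than once (`&amp;quot;` needs two passes to become `"`).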

Call for participation in the interview study with Wikidata editors

Dear Trilotat,

I hope you are doing well,

I am Kholoud, a researcher at King’s College London, and I work on a project as part of my PhD research that develops a personalized recommendation system to suggest Wikidata items for the editors based on their interests and preferences. I am collaborating on this project with Elena Simperl and Miaojing Shi.

I would love to talk with you to know about your current ways to choose the items you work on in Wikidata and understand the factors that might influence such a decision. Your cooperation will give us valuable insights into building a recommender system that can help improve your editing experience.

Participation is completely voluntary. You have the option to withdraw at any time. Your data will be processed under the terms of UK data protection law (including the UK General Data Protection Regulation (UK GDPR) and the Data Protection Act 2018). The information and data that you provide will remain confidential; it will only be stored on the password-protected computers of the researchers. We will use the anonymized results to provide insights into the practices of editors in item selection processes for editing, and will publish the results of the study at a research venue. If you decide to take part, we will ask you to sign a consent form, and you will be given a copy of this consent form to keep.

If you’re interested in participating and have 15-20 minutes to chat (I promise to keep the time!), please either contact me at kholoudsaa@gmail.com or kholoud.alghamdi@kcl.ac.uk or use this form https://docs.google.com/forms/d/e/1FAIpQLSdmmFHaiB20nK14wrQJgfrA18PtmdagyeRib3xGtvzkdn3Lgw/viewform?usp=sf_link with your choice of the times that work for you.

I’ll follow up with you to figure out the best way for us to connect.

Please contact me if you have any questions or require more information about this project.

Thank you for considering taking part in this research.

Regards

Kholoudsaa (talk)

Descriptions for new items

Hi there! I noticed you have been creating a number of items for journal articles. If you would, please remember to add descriptions for these, so they may be more easily distinguished from other, non-article items at a glance. I use something like "scholarly article published in MMM YYYY", but even just "scholarly article" or "scientific article" is better than nothing. Let me know if there are any questions, and thank you! Huntster (t @ c) 02:21, 14 July 2021 (UTC)
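If it helps, descriptions in the style suggested above are easy to generate from the publication date. A small sketch (the function name is mine, not an established tool):

```python
from datetime import date

def article_description(pub_date=None):
    """Build a description like 'scholarly article published in MMM YYYY',
    falling back to the bare 'scholarly article' when the date is unknown."""
    if pub_date is None:
        return 'scholarly article'
    return 'scholarly article published in {}'.format(pub_date.strftime('%B %Y'))

print(article_description(date(2021, 7, 14)))
print(article_description())
```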

DOI to ADS Bibcode

Hi there! Thanks for your work on ADS bibcode (P819) statements, I've noticed these links popping up in my watchlist a few times and I really appreciate it. I did a load of work on the RIMFAX (Q42317746) project a while ago and it's great to see items around it being improved.

It's got me wondering, do you have an easy way to go from a DOI to an ADS Bibcode? I'm working on the Younger Dryas impact hypothesis (Q1092095) at the moment (WikiProject) and would love to get Bibcodes for every paper that has one (currently only 26 out of 187). Here's a query for the items I'm looking at: https://w.wiki/4T9h

Any help would be much appreciated! Aluxosm (talk) 17:43, 25 November 2021 (UTC)
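(The https://w.wiki/4T9h short link hides the actual query text, so the SPARQL below is only an illustration of the general shape of such a query: items with a DOI (P356) but no ADS bibcode (P819). The instance-of filter and the LIMIT are assumptions.)

```python
# Illustrative WDQS query: scholarly articles with a DOI but no ADS bibcode.
# The real query behind the short link above may differ.
QUERY_TEMPLATE = """
SELECT ?article ?doi WHERE {{
  ?article wdt:P31 wd:Q13442814 ;   # instance of: scholarly article
           wdt:P356 ?doi .          # has a DOI
  FILTER NOT EXISTS {{ ?article wdt:P819 ?bibcode . }}  # ...but no bibcode yet
}}
LIMIT {limit}
"""

def build_query(limit=200):
    return QUERY_TEMPLATE.format(limit=limit)

print(build_query(50))
```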

@Aluxosm: Finally... something I've done with ADS Bibcode has garnered some positive interest (and not from a bot)! Thank you for the positive reinforcement. Funny that you should write today. I've been at this for months and months, really, with a few edits at a time, but yesterday I finally came up with a way to use VLOOKUP to connect the bibcodes I have with the Wikidata items without bibcodes. What I've done:
  • I pull a list of bibcodes, including DOIs, for a particular journal title, recently for the various Journals of Geophysical Research.
  • I also pull a Wikidata Query Service result for that journal showing ADS bibcode and DOI.
  • Combining these two lists, I use VLOOKUP in Excel to match each Wikidata item and its DOI against the ADS pull and its DOI.
  • I flip that into a QuickStatements batch to add the bibcodes to those Wikidata items that were missing one.
  • As the Brits might say... "and Bob's your uncle." Why do they say that?
Sorry for the complicated explanation of something that, after months of going about this painfully, is now really easy. Sadly, I have to apply this by publication, rather than to a list of works cited or some other arbitrary list. Wishing you a pleasant day. In the USA we're celebrating Thanksgiving, which means a day off and too much eating and too much family. Cheers, Trilotat (talk) 18:10, 25 November 2021 (UTC)
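The steps above can be sketched in pure Python, with a dict keyed on DOI standing in for VLOOKUP (all the rows below are made-up placeholders, not real data):

```python
# Placeholder rows standing in for the ADS export: (bibcode, DOI)
ads_rows = [
    ('1999Jrnl....1....1A', '10.1000/EXAMPLE.0001'),
    ('1999Jrnl....1....2B', '10.1000/EXAMPLE.0002'),
]

# Placeholder rows standing in for the WDQS pull: (QID, DOI), bibcode missing
wd_rows = [
    ('Q100000001', '10.1000/example.0001'),
    ('Q100000002', '10.1000/example.9999'),  # no ADS match for this one
]

# Index the ADS pull by upper-cased DOI -- this is the VLOOKUP step
# (DOIs are case-insensitive, so normalise before matching)
bibcode_by_doi = {doi.upper(): bibcode for bibcode, doi in ads_rows}

# Emit QuickStatements lines (item, P819, "bibcode") for the matches
statements = []
for qid, doi in wd_rows:
    bibcode = bibcode_by_doi.get(doi.upper())
    if bibcode:
        statements.append('{}\tP819\t"{}"'.format(qid, bibcode))

print('\n'.join(statements))
```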
I really hope this doesn't make you feel like you've wasted your time, but I've just coded up a DOI to Bibcode converter 😬, even have an import of all of the matches for that query ready to go. It's a bit rough around the edges at the moment but it works great, it just uses the API and a bit of Python goodness. I've gotta run now but I'll share it in a couple of hours if that's okay. Thanks for the inspiration and Happy Thanksgiving! Aluxosm (talk) 21:50, 25 November 2021 (UTC)
haha... no worries. I waste more time before 9:00 a.m. than most people waste all day! Coding up a DOI to Bibcode converter? Wow. I wish I had those skills. Cheers and enjoy your day. Trilotat (talk) 22:10, 25 November 2021 (UTC)
😂 thanks for going through them all the same! Here it is, warts and all:
# Copied straight from a Wikidata query
dois = """10.1371/JOURNAL.PONE.0155470
10.1360/N972019-00339
10.13140/RG.2.2.32278.70728
10.1166/JAMR.2011.1071""".split('\n')

# Copied straight from a Wikidata query - matches the order of the DOIs above
qids = """wd:Q34533443
wd:Q107249376
wd:Q108708743
wd:Q106848977""".split('\n')

##############################################################################

import requests
import urllib.parse

# ADD YOUR API KEY HERE
ADS_API_KEY = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'

# Remove the 'wd:' from the QID
clean_qids = [qid[3:] for qid in qids]

# Store the results as a list of tuples
results = []

# For every DOI, try and find a match
for index, doi in enumerate(dois):
    
    # Encode the parameter string
    doi_str = 'doi:{}'.format(doi)
    payload_str = urllib.parse.urlencode({'q': doi_str, 'fl': 'bibcode'}, safe=':')

    # Make the request
    r = requests.get('https://api.adsabs.harvard.edu/v1/search/query',
                     headers={'Authorization': 'Bearer {}'.format(ADS_API_KEY)},
                     params=payload_str)
    
    # Only save the bibcode if it was the only match
    response = r.json()['response']
    bibcode = response['docs'][0]['bibcode'] if response['numFound'] == 1 else None
    
    # Add the result if the bibcode is not None
    if bibcode is not None:
        print('Found a match for: {}'.format(doi))
        results.append((clean_qids[index], doi, bibcode))
    else:
        print("Couldn't find a match for: {}".format(doi))

print()
print()

##############################################################################

# Print as CSV for copying into OpenRefine
for result in results:
    print('"{0[0]}","{0[1]}","{0[2]}"'.format(result))
I'll clean it up and make it way easier to use over the next couple of days 👌 I'd be more than happy to walk you through it as well if you're interested. Thanks again! Aluxosm (talk) 22:54, 25 November 2021 (UTC)

Not the good authors

Hi,

I've found many items about scholarly articles where the listed authors aren't actually cited in them. As an example, this article doesn't cite M. Cappi, but the Wikidata item you've created gives that name. In fact, we can see a lot of examples here: https://author-disambiguator.toolforge.org/batches_oauth.php?id=4e83550b . Some of them are from Daniel Mietchen (Q108871505).
Any idea what the problem is? Simon Villeneuve (talk) 15:45, 10 May 2022 (UTC)

Greetings. I cannot confirm, but maybe the extra authors are part of the organizations listed in the arXiv preprint? See https://arxiv.org/pdf/1001.2450.pdf Trilotat (talk) 00:42, 11 May 2022 (UTC)

Q57305547

Hi,

Thanks for the revert on Georges Aad (Q57305547), but I'm not fully sure I understand, as you just removed the alias and didn't undo the merge. Was it just the first version of this item https://www.wikidata.org/w/index.php?title=Q64863277&oldid=970625526 that was plain wrong, or was it a bad idea for me to merge it? I would like to be sure that everything is good now with this item.

Cheers, VIGNERON en résidence (talk) 11:14, 16 May 2022 (UTC)

You were right to merge them, but an erroneous label (name) was brought over. The Semantic Scholar name was bad. My apologies for my confusing edit. Trilotat (talk) 11:48, 16 May 2022 (UTC)
@VIGNERON en résidence: If you follow the Semantic Scholar author link, it's someone else. That identifier is often a problem (not a mistake you made, but the data is a problem). I should have reverted the merge, repaired the source first (removing the bad alias), and then merged again. Can you revert that merge? I am sorry for my partial revert. Trilotat (talk) 12:08, 16 May 2022 (UTC)
Ok, got it, Semantic was the problem, that's what I thought. Not sure we need to unmerge-remerge for such a small detail (and I won't have time but feel free to do it if you really think it's necessary). Cheers, VIGNERON en résidence (talk) 12:49, 16 May 2022 (UTC)