Wikidata:Project chat


Wikidata weekly summary #641

Thanks for providing this :) So9q (talk) 05:27, 1 September 2024 (UTC)

terrorist (Q12414919) being a subclass of political activist (Q11499147)


This seems like a bit of a stretch, or is that just me? Trade (talk) 16:42, 27 August 2024 (UTC)

Doesn't seem far-fetched to me. Do terrorists not seek political or social change? One man's freedom fighter is another man's terrorist. -Animalparty (talk) 01:41, 28 August 2024 (UTC)
Google's dictionary suggests that acting in the pursuit of political aims is part of the definition of what makes someone a terrorist. ChristianKl 15:29, 2 September 2024 (UTC)

Property:P1813 short name with countries


The usage of this property differs from language to language. Looking at the USA and the UK, some languages write their form of "USA", while others write "United States" (likewise "UK" and "United Kingdom"). I'm looking for a more or less reliable field to retrieve a short name (not an abbreviation!), and I'm asking myself whether this is the one I could use for that. "UK" I would rather expect at international license plate code or something. I changed it for English and German on the UK and the US, but now I'm starting to worry that this might cause problems elsewhere. I would also like to change the value at Holy Roman Empire to "Holy Roman Empire" instead of "HRE". Any advice on the topic? Flominator (talk) 05:10, 29 August 2024 (UTC)

States like Kentucky also use it as "KY", where I would have expected just "Kentucky" as opposed to the official name "State of Kentucky". Of course, I could also use the label, but that would be another API call, whereas the claims I already have at hand at that point. --Flominator (talk) 05:35, 29 August 2024 (UTC)
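The fallback logic discussed here (use short name (P1813) in the wanted language if present, otherwise fall back to the label) can be sketched against a wbgetentities-style entity record. The JSON shape below mirrors the Wikidata API's entity format, but the sample record and values are invented for illustration only:

```python
# Sketch: prefer a short name (P1813) in the requested language, falling back
# to the item label. The entity dict mirrors the shape returned by the
# Wikidata wbgetentities API; the sample values below are invented.

def pick_short_name(entity, lang):
    # P1813 values are monolingual texts; several may exist per item
    for claim in entity.get("claims", {}).get("P1813", []):
        value = claim.get("mainsnak", {}).get("datavalue", {}).get("value", {})
        if value.get("language") == lang:
            return value.get("text")
    # Fall back to the label, which is meant to be the common name
    label = entity.get("labels", {}).get(lang)
    return label["value"] if label else None


sample = {  # abridged, invented record for a country item
    "labels": {"en": {"language": "en", "value": "United States"}},
    "claims": {
        "P1813": [
            {"mainsnak": {"datavalue": {"value": {"language": "en", "text": "USA"}}}},
            {"mainsnak": {"datavalue": {"value": {"language": "de", "text": "USA"}}}},
        ]
    },
}

print(pick_short_name(sample, "en"))  # -> USA
print(pick_short_name(sample, "fr"))  # -> None (no French short name or label)
```

This avoids the extra API call for the label only when P1813 is already present in the claims, which matches the situation described above.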

Looking at the examples of this property, both abbreviations and short but unabbreviated names look to be accepted: https://www.wikidata.org/wiki/Property:P1813#P1855 Bouzinac💬✒️💛 08:05, 29 August 2024 (UTC)
Its aliases "acronym" and "initialism" do make it ambiguous. Splitting off an "acronym" property might be best. Mind you, that wouldn't help the OP, who naturally refers to USA and US in their post, as we all do; UK/GB are a muddle, you'd code UK as both a short form and an acronym, and no one has attempted to unravel Foreign, Commonwealth and Development Office (Q358834)! Vicarage (talk) 08:22, 29 August 2024 (UTC)
Actually, it would help him a lot, because he could then go and set "United States" and "United Kingdom" as values for this property without getting his butt kicked (or could at least defend himself with a line of argumentation, in case it happens anyway). Flominator (talk) 09:01, 29 August 2024 (UTC)
Unfortunately, it looks like your proposal has been tried already in January of this year: Wikidata:Property proposal/abbreviation --Flominator (talk) 09:13, 29 August 2024 (UTC)
That proposal was very confused. What we'd want is 'initialism' ('acronym' is a pronounceable word), but as Michael Caine would say, not a lot of people know that. But it's not something that impacts me. Vicarage (talk) 16:52, 29 August 2024 (UTC)
Thanks. Let's hope Wikidata:Property proposal/initialism is less confused. --Flominator (talk) 10:03, 30 August 2024 (UTC)
That sounds like what the label is for. The label is supposed to be the name the item is most commonly known by (see Help:Label). We normally use official name (P1448) for the long/formal name. I don't know why United States of America (Q30) has the formal name as the label; even the English Wikipedia article is titled "United States". - Nikki (talk) 05:28, 1 September 2024 (UTC)

Mass-import policy


Hi, I suggested a new import policy similar to OpenStreetMap's in the Telegram chat yesterday and found support there.

Next steps could be:

  • Writing a draft policy
  • Deciding how many new items users are allowed to make without seeking community approval first.

The main idea is to raise the quality of existing items rather than import new ones.

I suggested that imports of 100 items or more should fall within this new policy. @nikki: @mahir256: @ainali: @kim: @VIGNERON: WDYT? So9q (talk) 12:11, 31 August 2024 (UTC)

@So9q: 100 items over what time span? But I agree there should be some numeric cutoff to item creation (or edits) over a short time period (day, week?) that triggers requiring a bot approval at least. ArthurPSmith (talk) 00:32, 1 September 2024 (UTC)
QuickStatements sometimes returns the error message Cannot automatically assign ID: As part of an anti-abuse measure, this action can only be carried out a limited number of times within a short period of time. You have exceeded this limit. Please try again in a few minutes.
M2k~dewiki (talk) 02:07, 1 September 2024 (UTC)
Time span does not really matter; intention does.
Let me give you an example: I recently imported fewer than 100 banks when I added Net-Zero Banking Alliance (Q129633684). I added them during one day using OpenRefine.
That's OK: it's a very limited scope, and we already had most of the banks. It can be discussed whether the new banks, which are perhaps not notable, should have been created or just left out, but I did not do that, as we have no policy, culture, or space to talk about imports before they are done. We need to change that.
Other examples:
  • Importing all papers in a university database or similar, totaling 1 million items over half a year using automated tools, is not OK without prior discussion, no matter whether QS or a bot account was used.
  • Importing thousands of books/monuments/whatever as part of a GLAM project over half a year is not ok without prior discussion.
  • Importing all the bridges in the Czech Republic e.g. Q130213201 during whatever time span would not be ok without prior discussion. @ŠJů:
  • Importing all hiking paths of Sweden e.g. Bohusleden (Q890989) over several years would not be ok.
etc.
The intention to import many objects without prior community approval is what matters. The community is your boss: be bold when editing, but check with your boss before mass-imports. I'm pretty sure most users would quickly get the gist of this policy. A good principle could be: if in doubt, ask first. So9q (talk) 05:16, 1 September 2024 (UTC)
@So9q: I'm not sure that the mention of individually created items for bridges belongs in this mass-import discussion. There is the Wikidata:Notability policy, and so far no one has questioned the creation of entries for physically existing objects that are registered, charted, and have or may/should have photos or/and categories in Wikimedia Commons. If a community has been working on covering and processing a topic for twenty years, it is probably not appropriate to suddenly start questioning it. I understand that what is on the agenda in one country may appear unnecessarily detailed in another one. However, numbered roads, registered bridges, or officially marked hiking paths are not a suitable example to question; their relevance is quite unquestionable.
The question of actual mass importation would be moot if the road administration (or another authority) published the database in an importable form. Such a discussion is usually led by the local community: for example, Czech named streets were all imported, but registered addresses and buildings were not imported wholesale (though individual items can be created as needed). Similarly, the import of authority records of the National Library, registers of legal entities, etc. is assessed by the local community; usually the import is limited by some criteria. It is advisable to coordinate and inspire such imports internationally; however, the decision is usually based on practical reasons, i.e. the needs of those who use the database. It is true that such discussions could be more transparent, not just separate discussions of some working group, and it would be appropriate to create some formal framework for presenting and documenting individual import projects: for example, a project page that contains the discussion, the principles of the given import, a contact for the given working group, etc., and the project page should be linked from edit summaries. --ŠJů (talk) 06:03, 1 September 2024 (UTC)
Thanks for chipping in. I do not question the notability of the items in themselves. The community on Telegram has voiced the opinion that this whole project has to consider what we want to include and not, when, and what to give priority.
Millions of our current items are in a quite sad state as it is. We might not have the manpower to keep quality at an acceptable level as is.
To give you one example: Wikidata currently does not know which Swedish banks are still in operation. Nobody worked on the items in question, see Wikidata:WikiProject_Sweden/Banks, despite them being imported many years ago (some are from 2014) from svwp.
There are many examples to draw from where we have only scratched the surface. @Nikki mentioned in Telegram that there are a ton of items with information in descriptions not being reflected by statements.
A focus on improving what we have, rather than inflating the total number of items, is a desire of the Telegram community.
To do that we need to discuss imports, whether already ongoing or not, whether very notable or not that notable.
Indeed, increased steering and formality would be needed if we were to adopt an import policy in Wikidata. So9q (talk) 06:19, 1 September 2024 (UTC)
Just as a side note, with no implications for the discussion here: "The community in Telegram has voiced" is irrelevant, as I understand it. Policies are decided here on the wiki, not on Telegram. Or? --Egon Willighagen (talk) 07:58, 1 September 2024 (UTC)
It is correct that policies are created on-wiki. However, it may also be fair to use that as a prompt to start a discussion here, and transparent to explain that that is the case. It won't really carry weight unless the same people also voice their opinions here, but there is also no reason to belittle it just because people talked somewhere else. Ainali (talk) 08:13, 1 September 2024 (UTC)
+1. I'll add that the Wikidata channel is still rather small compared to the total number of active Wikidata editors (1% or less is my guess). Also, the participation of editors in the chat is very uneven: a few very active editors/chat members contribute most of the messages (I'm probably one of them, BTW). So9q (talk) 08:40, 1 September 2024 (UTC)
Sorry, I did not want to imply that discussion cannot happen elsewhere. But we should not assume that people here know what was discussed on Telegram. Egon Willighagen (talk) 10:36, 1 September 2024 (UTC)
Terminology matters, and the original bot policies are probably no longer clear to the current generation of Wikidata editors. With tools like OpenRefine and QuickStatements, I have the impression it is no longer clear what is a "bot" and what is not. You can now easily create hundreds of items with either of these tools (and possibly others) in an editor-driven manner. I agree it is time to update the Wikidata policies around imports. One thing to make clear is the distinction between mass creation of items and mass import (the latter can also mean mass-importing annotations and external identifiers, or links between items, without creating items). -- Egon Willighagen (talk) 08:04, 1 September 2024 (UTC)
I totally agree. Since I joined around 2019, I have really struggled to understand what is okay and what is not when it comes to mass-edits and mass-imports. I have had a few bot requests declined. Interestingly, very few of my edits have ever been questioned. We should make it simple and straightforward for users to learn what is okay and what is not. So9q (talk) 08:43, 1 September 2024 (UTC)
I agree that we need an updated policy that is simple to understand. I also really like the idea of raising the quality of existing items. Therefore, I would like the policy to recommend that, or even to make an exception from preapproval if, there is a documented plan to weave the imported data into the existing data in a meaningful way. I don't know exactly how it could be formulated, but creating inbound links and improving the data beyond the source should be behavior we want to see, whereas just duplicated data on orphaned items is what we don't want to see. And obviously, these plans need to be completed before new imports can be made; gaming the system will, as usual, not be allowed. Ainali (talk) 08:49, 1 September 2024 (UTC)
@ainali: I really like your idea of "a documented plan to weave the imported data into the existing data in a meaningful way". This is very similar to the OSM policy.
They phrase it like so:
"Imports are planned and executed with more care and sensitivity than other edits, because poor imports have significant impacts on both existing data and local mapping communities." source
A similar phrasing for Wikidata might be:
"Imports are planned and executed with more care and sensitivity than other edits, because poor imports have significant impacts on existing data and could rapidly inflate the number of items beyond what the community is able or willing to maintain."
WDYT? So9q (talk) 08:38, 5 September 2024 (UTC)
In general, any proposal that adds bureaucracy making it harder for people to contribute should start with explaining what problem it wants to solve. This proposal contains no such analysis, and I do consider that problematic. If there's a rule, 10,000 items/year seems to me more reasonable than 100. ChristianKl 15:37, 2 September 2024 (UTC)
Thanks for pointing that out. I agree. That is the reason for me to raise the discussion here first instead of diving right into writing an RfC.
The community in Telegram seems to agree that a change is needed and has pointed to some problems. One of them, mentioned by @Nikki:, was that most of the manpower and time in WMDE for the last couple of years seems to have been spent on trying to avoid a catastrophic failure of the infrastructure rather than improving the UI, etc. At the same time, a handful of users have imported mountains of half-assed data (poor-quality imports) and show little or no sign of willingness to fix the issues pointed out by others in the community. So9q (talk) 08:49, 5 September 2024 (UTC)

There are many different discussions going on here on Wikidata. Anyone can open a discussion about anything if they feel the need. External discussions outside of Wikidata can evaluate or reflect on the Wikidata project, but should not be used to make decisions about Wikidata.

This discussion's scope is a bit confusing. By mass import I mean a one-time machine conversion of an existing database into Wikidata. However, the examples given relate to items created manually and occasionally over a long period of time. In relation to this activity, they do not make sense. If 99 items of a certain type are made in a few years, everything is fine, and as soon as the hundredth item has to be made, we suddenly start treating the topic as "mass import" and start demanding a prior discussion? That makes absolutely no sense. For this, we have the rules of notability, and they apply already to the first such item; they have no connection with "mass imports".

As I mentioned above, I would like each (really) mass import to have its own documentation project page, from which it would be clear who did the import, according to what policies, and whether someone is taking care of the continuous updating of the imported data. It is possible to appeal to mass importers to start applying such standards in their activities. It is also possible to mark existing items with some flags that indicate which specific workgroup (subproject) takes care of maintaining and updating the item. --ŠJů (talk) 18:06, 1 September 2024 (UTC)

Maybe using the existing "bot requests" process is overkill for this (applying for a bot flag shouldn't be necessary if you are just doing QS or OpenRefine work), but it does seem like there should be either some sort of "mass import requests" community approval process or, as ŠJů suggests, a structural prerequisite (documentation on a WikiProject or something of that sort). And I do agree that if we are not talking about a time-limited threshold, then 100 is far too small. Maybe 10,000? ArthurPSmith (talk) 22:55, 1 September 2024 (UTC)
There are imports based on existing identifiers; these should be documented on property talk pages (e.g. a new mass import of newly created identifiers every month, usually using QS). The next big group is imports of existing geographic features (which can be photographed); these have coordinates, so they are visible on maps. Some of them are in the focus of only a few people. Maybe document them in the country wikiproject? JAn Dudík (talk) 15:49, 2 September 2024 (UTC)


My thoughts on this matter:
  • we indeed need a page (maybe a policy, maybe just a help page, a recommendation, a guideline, etc.) to document how to do a good mass-import
  • mass-import should be defined in more precise terms: is it only creation, or any edits? (they are different, but both could be problematic and should be documented)
  • 100 items is very low
    • it is just the 2nd of September and 3 people have already created more than 100 items! In August, 215 people created 100 or more items; the community can't process that much
    • I suggest at least 1000, maybe 10,000 items (depending on whether we focus only on creations or on any edits)
  • no time span is strange: is it even a mass-import if someone creates one item every month for 100 months? Since most mass-imports are done with tools, most happen in a small period; a time span of a week is probably best
  • quantity is a problem, but quality should also be considered, as well as novelty (creating/editing items following a well-known model is not the same thing as creating a new model from scratch; the second needs more review)
  • could we act on Wikidata:Notability? Should mass-imports be "more" notable, or at least should notability be more thoroughly checked?
    • the 2 previous points depend on references, which are often suboptimal right now (most imports are from one source only, whereas crossing multiple references should be encouraged where possible)
  • the bot policies (especially Wikidata:Requests for permissions/Bot) probably need an update/reform too
  • finally, there is a general problem concerning a lot of people, but it is mainly a few power-users who are borderline abusing the resources of Wikidata; we should focus on the latter before burdening the former, as it would be easier and more effective (i.e. dealing with one 100,000-item import rather than with 1000 imports of 100 items).
Cheers, VIGNERON (talk) 09:20, 2 September 2024 (UTC)
In my opinion, items for streets are very useful, because there are a lot of pictures with categories showing streets. There are street directories. Streets often have their own names, are historically significant, buildings/cultural heritage monuments can be found via the respective street categories, and they are helpful for cross-referencing. So please keep items for streets. Triplec85 (talk) 10:26, 5 September 2024 (UTC)
As streets are very important for infrastructure and for the structuring of villages, towns, and cities, yes, they are notable, especially if we consider the historical value of older streets or how often they are used (in the real world). Data objects for streets can be combined with many identifiers from organizations like OpenStreetView and others, and they are a good element for categorizing. It's better to have images categorized in Hauptstraße (Dortmund) (with a data object that offers quick facts) than only in Dortmund. Also, streets are essential for local administrations, which emphasizes their notability. And you can structure where the street names come from (and how often they are used), which streets they are connected with, etc. For me, I see many good reasons for having lists of streets in Commons, Wikidata, whatever; it gives a better overview, also for future developments, when streets are populated later or get cultural heritage monuments, or to track the renaming of streets... --PantheraLeo1359531 (talk) 10:37, 5 September 2024 (UTC)
Regarding the distribution/portion of content by type (e.g. scholarly articles vs. streets / architectural structures), see this image:
Content on Wikidata by type
M2k~dewiki (talk) 10:41, 5 September 2024 (UTC)
Better to consider the number of potential items. ScienceOpen has most studies indexed and stands at 95 M items (getting to the same degree of completion would unlock several use-cases of Wikidata, like Scholia charts, albeit not substituting for all the things the mentioned site can be used for, as abstract text content as well as altmetrics scores etc. are not included here). 100 M isn't that large, and I think there are more streets than there are scholarly articles; having a number there would be nice.
-
I think of Wikidata's usefulness beyond merely linking Wikipedia articles in this way: what other widely used online databases exist, and can we do the same but better and more? Currently, Wikidata can't be used to fetch book metadata into your ebook reader or song metadata into your music player/library, can't be used for getting food metadata in food-tracking apps, can't tell you the often problematic ingredients of cosmetics/hygiene products, can't be used to routinely monitor or search new studies in a field, or do pretty much anything else that is actually useful to real people. So I'd start working on the data coverage of such areas first before importing lots of data with unknown, questionable potential future use, or doing manual item creation/editing. If we got areas covered that people actually use and need, then we could still improve areas where no use-cases yet exist, or which at best only slightly improve on the proprietary, untransparent-algorithm Google Web search results (which don't even index files & categories on Commons). I'd be interested in how other people think about WD's current and future uses, but discussing that may be somewhat outside the scope of this discussion. Prototyperspective (talk) 13:35, 7 September 2024 (UTC)
I use streets a lot to qualify locations, especially in London - see Ambrose Godfrey (Q130211790) where the birth and death locations are from the ODNB. - PKM (talk) 23:16, 5 September 2024 (UTC)
Disagree on books and papers then: they need to be imported to enable all sorts of useful things which are otherwise not possible or misleading, such as the statistics of Scholia (e.g. research field charts, author timelines/publications, etc.).
I think papers are mostly a (nearly) all-or-nothing thing: before a high degree of completeness they aren't that useful, and I don't see much of a use-case. Besides charts, one could query them in all sorts of interesting ways once they are fairly complete, and embed results of queries (e.g. studies by author, sortable by citations & other metrics, on the WP article about the person).
When fairly complete and unvandalized, they could also be analyzed, e.g. for AI-supported scientific discovery (there are studies on this), and be semantically linked & queried and so on.
It's similar for books. I don't know how Wikidata could be useful in that space if it doesn't contain at least as many items with metadata as other websites. For example, one could then fetch metadata from Wikidata instead of from those sites. In contrast to studies, I currently don't see an actual use-case for WD items for streets: they may be useful at some point, but I don't see why or how, now or in the near future. Prototyperspective (talk) 00:31, 6 September 2024 (UTC)

Hello, from my point of view, there would be some questions regarding such a policy, like:

  • What is the actual goal of the policy? What is it trying to achieve? How will this goal be achieved?
  • Is it only a recommendation for orientation (which easily can be ignored)?
  • Is this policy realized in a technical manner, so users are blocked automatically? Currently, for example, QuickStatements already implements some policy and disables the creation of new objects with the error message: Cannot automatically assign ID: As part of an anti-abuse measure, this action can only be carried out a limited number of times within a short period of time. You have exceeded this limit. Please try again in a few minutes. How will the new policy be different from the current anti-abuse policy?
  • Who will control and decide whether the policy is followed or ignored, and how? What are the consequences if the policy is ignored?
  • Does this policy only include objects without sitelinks or also objects with sitelinks to any language version or project like wikisource, commons, wikivoyage, wikibooks, ...?
  • Does this policy only concern the creation of new objects or also the modification of existing objects?
  • How is quality defined regarding this policy? How and by whom will it be decided whether a user and/or user task is accepted for a higher limit?
  • There are always thousands of unconnected articles and categories in any language version and project (including commons), for example
  • https://wikidata-todo.toolforge.org/duplicity/#/
 
AutoSuggestSitelink-Gadget

Who will connect them to existing objects (if they exist) or create new objects if they don't yet exist, and when (especially if there is a new artificial limit on creating such objects)? Will someone implement and operate a bot for all 300 Wikipedia language versions and all articles, all categories (including commonscats), all templates, all navigation items, ... to connect sitelinks to existing objects or create new objects where they don't yet exist?

From my point of view, time and resources should be spent on improving processes, tools, and help, and on supporting and educating people in order to improve data quality and completeness. For example, in my opinion the meta:AutosuggestSitelink gadget should be activated by default for all users on all language versions in the future.

Some questions and answers which came up over the last years (in order to help, educate and support users) can be found at

M2k~dewiki (talk) 19:24, 2 September 2024 (UTC)
For example, the functionality of
could also be implemented as a bot in the future by someone. M2k~dewiki (talk) 21:13, 2 September 2024 (UTC)
This wasn't written when the discussion started here, but here is a summary of the growth of the databases that this policy partly addresses: User:ASarabadani (WMF)/Growth of databases of Wikidata. There are also some relevant links on Wikidata:WikiProject Limits of Wikidata. As an extremely high-level summary: Wikidata is growing so quickly that we will hit various technical problems, and slowing down the growth (perhaps by prioritizing quality over quantity) is a way to find time to address some of them. So the problem is wider than just new item creations, but slowing those would certainly help. Ainali (talk) 08:09, 3 September 2024 (UTC)
This has also recently been discussed at
Possible solutions could be:
M2k~dewiki (talk) 08:18, 3 September 2024 (UTC)
Just a note that the split seems to have happened now, so some more time is bought.
Ainali (talk) 21:14, 3 September 2024 (UTC)
Please note that "Cannot automatically assign ID: As part of an anti-abuse measure, this action can only be carried out a limited number of times within a short period of time. You have exceeded this limit. Please try again in a few minutes." is not an error message or limit imposed by QuickStatements; it is a rate limit set by Wikibase, see phab:T272032. QuickStatements should be able to run a large batch (~10k commands) at a reasonable speed (one not causing infra issues). If QuickStatements does not retry when the rate limit is hit, I consider it a bug; batches should be able to run unattended with an error-recovery mechanism. GZWDer (talk) 14:50, 4 September 2024 (UTC)
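The retry behaviour described here (back off and resume when the Wikibase rate limit is hit) can be sketched as follows. This is not QuickStatements' actual code: `submit`, the command string, and matching on the error text are hypothetical stand-ins; a real tool would inspect the structured API error code instead.

```python
import time

# Substring of the Wikibase rate-limit message quoted above
RATE_LIMIT_MARKER = "exceeded this limit"

def run_with_backoff(submit, command, max_retries=5, base_delay=1.0):
    """Retry a single batch command when the Wikibase rate limit is hit.

    `submit` is a hypothetical callable that raises RuntimeError carrying
    the rate-limit message; any other error is re-raised immediately.
    """
    for attempt in range(max_retries + 1):
        try:
            return submit(command)
        except RuntimeError as err:
            if RATE_LIMIT_MARKER not in str(err) or attempt == max_retries:
                raise  # a genuine error, or we are out of retries
            # Exponential backoff: wait base_delay, 2x, 4x, ... then resume
            time.sleep(base_delay * (2 ** attempt))


attempts = {"n": 0}

def fake_submit(cmd):  # stand-in for a QuickStatements-style API call
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("As part of an anti-abuse measure ... "
                           "You have exceeded this limit.")
    return "done"

print(run_with_backoff(fake_submit, "CREATE", base_delay=0.01))  # -> done
```

With this pattern a batch runs unattended: the first two simulated calls hit the rate limit, and the third succeeds after short waits.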
Thanks for linking to this report. I see that revisions is a very large table. @Nikki linked to this bot, which makes 25+ single edits to the same astronomical item before progressing to the next. This seems very problematic, and given the information on that page, this bot should be stopped immediately. So9q (talk) 09:12, 5 September 2024 (UTC)
For what reason? That he is doing his job? Matthiasb (talk) 00:53, 6 September 2024 (UTC)
No, because it is wasting resources while doing the job. The bot could have added everything in one edit; instead it added unnecessary rows to the revisions table, which is a problem. Ainali (talk) 06:00, 6 June 2024 (UTC)
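The difference Ainali describes (one revision instead of 25+) comes from bundling all new statements into a single wbeditentity-style edit rather than issuing one API call per claim. The sketch below only builds the payload; the property IDs and `build_statement` helper are simplified stand-ins for illustration, loosely following the Wikibase API's claim JSON:

```python
# Sketch: instead of one API edit per statement (25+ revisions on the same
# item), collect all new statements into a single wbeditentity-style payload,
# producing exactly one new row in the revisions table.

def build_statement(prop, value):
    # Simplified stand-in for a full Wikibase statement object
    return {"mainsnak": {"snaktype": "value", "property": prop,
                         "datavalue": {"value": value, "type": "string"}},
            "type": "statement", "rank": "normal"}

def batch_payload(statements):
    """Bundle many (property, value) pairs into one edit payload."""
    claims = {}
    for prop, value in statements:
        claims.setdefault(prop, []).append(build_statement(prop, value))
    return {"claims": claims}


# Invented example: several catalog codes and a constellation in one edit
new_statements = [("P528", "HD 1"), ("P528", "2MASS J0001"), ("P59", "Andromeda")]
payload = batch_payload(new_statements)
print(len(payload["claims"]))          # 2 properties touched
print(len(payload["claims"]["P528"]))  # 2 catalog codes, still one revision
```

Submitting `payload` once would record a single revision, whereas looping over `new_statements` with per-claim calls would record three.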

I would echo the points raised above by M2k~dewiki. My feeling is that when we actually think about the problem we are trying to solve in detail, there will be better solutions than placing arbitrary restrictions on upload counts. There are many very active editors who are responsible and actually spend much of their time cleaning up existing issues, or enriching and diversifying the data. Many GLAMs also share lots of data openly on Wikidata (some exclusively so), helping to grow the open knowledge ecosystem. To throttle this work goes against everything that makes Wikidata great! It also risks fossilising imbalances and biases we know exist in the data. Of course there are some folks who just dump masses of data into Wikidata without a thought for duplication or value to the wider dataset, and we do need a better way to deal with this. But I think that some automated checks of mass-upload data (1000s, not 100s) looking for potential duplication, interconnectivity, and other key indicators of quality might be more effective at flagging problem edits and educating users, whilst preserving the fundamental principles of Wikidata as an open, collaborative data space. Jason.nlw (talk) 08:34, 3 September 2024 (UTC)

We definitely need more clarity on the guidelines, thanks for putting that up! Maybe we can start with some very large upper boundary so we can at least agree on the principle and enforcement tooling? I suggest we start with a hard limit of 10k new items/month for non-bot accounts, plus wording saying that creation of over 1000 items per month should be preceded by a Bot Request if the items are created from the same source, and that any large-scale creation of items (e.g. 100+ items in a batch) should at least be discussed on-wiki, e.g. on the related WikiProject. Also, I think edits are more complicated than new item creations; author-disambiguator.toolforge.org/, for example, allows users to make 100k edits in a year semi-manually. For simplicity, it may be a good idea to focus only on item creation at this point. TiagoLubiana (talk) 21:00, 3 September 2024 (UTC)

User statistics can be found for example at:
Recent batches, editgroups, changes and creations can be found for example at:
Also see
M2k~dewiki (talk) 21:41, 3 September 2024 (UTC)
Including bots:
M2k~dewiki (talk) 21:45, 3 September 2024 (UTC)
@DaxServer, Sldst-bot, Danil Satria, LymaBot, Laboratoire LAMOP: for information. M2k~dewiki (talk) 13:23, 4 September 2024 (UTC)
@Kiwigirl3850, Romano1920, LucaDrBiondi, Fnielsen, Arpyia: for information. M2k~dewiki (talk) 13:24, 4 September 2024 (UTC)
@1033Forest, Brookschofield, Frettie, AdrianoRutz, Luca.favorido: for information. M2k~dewiki (talk) 13:24, 4 September 2024 (UTC)
@Andres Ollino, Stevenliuyi, Quesotiotyo, Vojtěch Dostál, Alicia Fagerving (WMSE): for information. M2k~dewiki (talk) 13:24, 4 September 2024 (UTC)
@Priiomega, Hkbulibdmss, Chabe01, Rdmpage, Aishik Rehman: for information. M2k~dewiki (talk) 13:24, 4 September 2024 (UTC)
@Cavernia, GZWDer, Germartin1, Denelson83, Epìdosis: for information. M2k~dewiki (talk) 13:24, 4 September 2024 (UTC)
@DrThneed, Daniel Mietchen, Matlin: for information. M2k~dewiki (talk) 13:24, 4 September 2024 (UTC)
Just a first quick thought (about method and not content): apart from pings (which are very useful indeed), I think that the best place to make decisions on such an important matter would be an RfC; IMHO an RfC should be opened as soon as possible and this discussion should be moved to its talk page, in order to elaborate there a full set of questions to then be clearly put to the community. Epìdosis 13:49, 4 September 2024 (UTC)Reply
Also see
M2k~dewiki (talk) 16:55, 4 September 2024 (UTC)Reply
I don't really understand what the proposal here is or what problem it is trying to solve, there seem to be quite a number of things discussed.
That said, I am absolutely opposed to any proposal that places arbitrary limits on individual editors or projects for new items or edits based simply on the number rather than the quality of edits. I think this would be detrimental because it incentivises working only with your own data rather than contributing to other people's projects (why would I help another editor clean up their data if it might mean I go over my quota and so can't do the work that is more important to me?).
And in terms of my own WikiProjects, I could distribute the new item creation to other editors in those projects, sure, but to what end? The items still get created, but with less oversight from the most involved and knowledgeable editor and so likely with greater variability in data quality. How is that a good thing? DrThneed (talk) 23:50, 4 September 2024 (UTC)Reply
  • For some reason I think that some/more/many of the discussants here have a wrong understanding of how Wikidata works – not technically, but in its correspondence with other WM projects such as Wikipedia. In fact we have the well-established rule that each Wikipedia article deserves an item. I don't know how many populated-place articles LSJ bot created or how many items we have which are populated places, but citing earlier discussions in the German Wikipedia years ago about how far Wikipedia can expand, I calculated that the number of populated places on earth might exceed 10 million. So would creating 10 million WP articles on populated places cause a mass upload on Wikidata, since every WP article in any language is to be linked to other language versions via Wikidata? (I also roughly calculated the possible number of geographic features on earth, with nearly two million in the U.S. alone; I think it is up to 100 million on earth in total. We consider all of them notable, so at some point we will have items on 100 million geographic features caused by interwiki.) Is this mass upload? Shall we look at cultural heritage? In the U.S., the United Kingdom, France and Germany together there exist about one million buildings which are culturally protected in one way or another. At this time some 136,080 of them, there or elsewhere in the world, have articles in the German WP, and yes, they are linked in Wikidata. Counting all other countries together, some more millions will add to this.
  • When I started in the German WP, some 18 years ago, it had some 800,000 articles or so. At that time we had users who tried hard to shrink the German WP back to 500,000 articles. They failed. At some point before the end of this year the number of articles in the German WP will exceed 3,000,000, with the French WP reaching the same mark some four or five months later. Though many of those subjects might already exist in one language version or more, many of the articles on the way to the three-million mark might not have a Wikidata item yet. Are these, say, 100,000 items mass upload?
  • And, considering a project I am involved with, GLAM activities on Caspar David Friedrich (Q104884): the Hamburg exhibition Caspar David Friedrich. Art for a New Age Hamburger Kunsthalle 2023/24 (Q124569443) featured some 170 or so works of the artist, all of them considered notable in the WP sense of notability, but we need all of them in Wikidata anyway for reasons of provenance research, for which WD is essential. So if along the way I create 50, 70 or 130 items and link them up with their image files on Commons and whatever individual WP language articles can be found, am I committing the crime of mass upload, even if the process takes weeks and weeks because catalogues of different museums use different namings and identification can be done only visually?

Nope. When talking about the size of Wikidata we must take it as given that in 2035 Wikidata will be at least ten times bigger than today. If we are talking about better data, I agree with that as an important goal, which requires adding data based on open sources which do not rely on wikis but on original data from the source, e.g. statistical offices, and which get sourced in a proper way. (But when I called for sources some months ago someone laughed and said Wikidata is not about sourcing data but about collecting it. Actually that user should have been banned for eternity and yet another three days.) Restricting upload won't work and might even prevent adding high-quality data. --Matthiasb (talk) 00:48, 6 September 2024 (UTC)Reply

@Matthiasb In an ideal world, your points make a lot of sense. However, the technical infrastructure is struggling (plenty of links to those discussions above), and if we put ideals over practicalities, then we will bring Wikidata down before the developers manage to solve them. And then we will definitely not be ten times bigger in 2035. Hence, the thoughts about slowing the growth (hopefully only temporarily). If we need to prioritize, I would say that maintaining sitelinks goes above any other type of content creation and would be the last one to slow down. Ainali (talk) 06:08, 6 September 2024 (UTC)Reply
No objections to the latter part. It was in the second half of the 2000s when MediaWiki was struggling with its own success, users getting timeouts all the time. Yet Tim Starling told us something like "don't care about resources". Well, he said something slightly different, but I don't remember the actual wording. The core of his remarks was that we as a community should not get headaches over it. When it comes to lacking resources, it would be his job and that of the server admins and technical admins to fix it. (But if it became necessary to act, then we should do what they say.) Matthiasb (talk) 11:31, 6 September 2024 (UTC)Reply
You probably refer to these 2000s quotes: w:Wikipedia:Don't worry about performance. --Matěj Suchánek (talk) 17:29, 6 September 2024 (UTC)Reply
Maybe we need the opposite for Wikidata?
If that mindset carries over to the use of automated tools (e.g. creating every tree in OpenStreetMap, there are 26,100,742 as of 2024-09-07, and linking them to other features in OSM/Wikidata), that would very quickly become totally technically unsustainable. Imagine every tree in OSM having a ton of statements like the scientific articles. That quickly becomes a mountain of triples.
I'm not saying we could not do it, perhaps even start today, but what is the combined human and technical cost of doing it?
What other items do we have to avoid importing to avoid crashing the system?
What implications would this and other mass-imports have on the current backend?
Would WMF split the graph again? -> Tree subgraph?
How many subgraphs do we want to have? 0?
Can they easily (say in QLever in less than an hour) be combined again or is that non-trivial after a split?
Is a split a problem in itself or perhaps just impetus for anyone to build a better graph backend like QLever or fork Blazegraph and fix it?
We need to discuss how to proceed and perhaps vote on new policies to avoid conflict and fear and eventually perhaps a total failure of the community with most of the current active users leaving.
Community health is a thing, how are we doing right now? So9q (talk) 08:30, 7 September 2024 (UTC)Reply
Yes, thank you @Matěj Suchánek! --Matthiasb (talk) 11:22, 7 September 2024 (UTC)Reply
Do we have a community? Or better asked, how many communities do we have? IMHO there are at least three different communities:
  • people formerly active on Wikipedia who came over when their activity as interwiki bot owners wasn't needed in Wikipedia anymore; most of them will likely have tens of thousands of edits each year.
  • Wikipedia users occasionally active on Wikidata; some might only fix issues, others might prepare further usage of Wikidata in their own Wikipedia. Most of them have from several hundred to a few thousand edits.
  • users who don't fit into the former two groups but use Wikidata for some external project, far away from WMF. I mentioned above groups of museums world-wide for whom WD is an easily accessible database providing the infrastructure needed in provenance research. I don't think that this group is big, and they might edit selected items. Probably hundreds to a few thousand edits. This group might or might not collaborate with the former.
Some months back I saw a visualization of how WikiProjects in the English Wikipedia create sub-communities, some of them overlapping, others not overlapping at all; about 50 of them are of notable size. Maybe within the three classes of communities as I broke them down above there also exist several sub-communities, with more or less interaction. I can't comment much on your other questions. I don't know more about OSM than looking up the map. I have no clue how Wikibase works.
Peeking over the horizon, we have been seeing some performance issues at Commons for some time now. It seems people are beginning to turn away from Commons because, for example, they have taken hundreds of photographs at an event but batch upload doesn't work. Wikinews editors are waiting for the uploads, which do not come. Well, they won't wait, of course. They won't write articles on those events. So the Commons issue also affects Wikinews. By the way, restrictions drive away users as well. That's the reason why, or how, German Wikiquote killed itself several months ago.
As I said, I don't know what effects the measures you mentioned will have on Wikidata users, on users of other WMF projects, and on this "external" user community I mentioned above. Matthiasb (talk) 11:58, 7 September 2024 (UTC)Reply
And just another thought: if we want to have better data we should prohibit uploading data based only on some Wikipedia, and maybe even remove statements sourced with WP only, after some transition period, say the end of 2026. We don't need to import population data for U.S. settlements, for example, from any Wikipedia, if the U.S. Census Bureau is offering ZIP files for every state containing all that data. (However, the statement URL does not need a source, as it is its source itself.) We should also enforce that users add the language of the sources; many users are neglecting it. (And I see some misdirected mentality that "English" as a source language is not needed at all, but Wikidata isn't an English-language database, is it?) Matthiasb (talk) 14:05, 7 September 2024 (UTC)Reply
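As a rough sketch of how the Wikipedia-sourced statements mentioned above could be located: references created by imports typically use "imported from Wikimedia project" (P143), so a WDQS query can list candidate statements for re-sourcing (shown here for population, P1082; note a statement may carry other references besides the P143 one):

```sparql
# Population (P1082) statements that carry an "imported from
# Wikimedia project" (P143) reference, i.e. values imported from
# a Wikipedia; candidates for replacement with primary sources.
SELECT ?item ?population ?sourceWiki WHERE {
  ?item p:P1082 ?statement .
  ?statement ps:P1082 ?population ;
             prov:wasDerivedFrom ?ref .
  ?ref pr:P143 ?sourceWiki .
}
LIMIT 100
```

Without the LIMIT such a query would cover millions of statements and likely time out on the public endpoint.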
I totally agree with @Ainali. We need a healthy way for this community to operate and make decisions that take both human and technical limits into account. So9q (talk) 08:16, 7 September 2024 (UTC)Reply

Utilizing Wikidata for Enhanced Game Development

edit

As someone involved in game development, especially with creative games like Toca Boca, I’ve been thinking about how we can better utilize Wikidata to support and enhance our projects. Wikidata’s structured data could be incredibly valuable for game developers, particularly when it comes to organizing in-game data, tracking character relationships, or even managing large amounts of game-related content. For example, imagine using Wikidata to dynamically update in-game databases or to create more interactive and data-driven gaming experiences. This could also help in maintaining consistency across different game versions and localizations. Has anyone here explored using Wikidata in game development, or do you have any thoughts on how we could leverage its capabilities in this field? I’d love to hear about any experiences or ideas you might have. Stephan0098 (talk) 22:33, 31 August 2024 (UTC)Reply

@Stephan0098: I'm a bit sceptical if all of that data would meet our notability policy, especially if the game is not yet published. However, the software behind Wikidata (Wikibase) is free and open for everone to use and customize. I would encourage you to have a look at that :) Samoasambia 00:58, 1 September 2024 (UTC)Reply

Dating for painting at Q124551600

edit

I have a painting that is dated by the museum as "1793 (1794?)", so it seems it was likely made in 1793 but there is a small chance that it was only made in 1794. When I enter both dates I get an error report. How do I fix that? How do I give one the preferred rank? I don't find a fitting field. Carl Ha (talk) 06:42, 1 September 2024 (UTC)Reply

Mark the one with the highest chance as "preferred"? And add a 'reason' qualifier to indicate the preference is based on higher probability? The "deprecated" counterpart (reason for deprecated rank (P2241)) has hundreds of reasons in use (there is a list, Wikidata reasons for deprecation (Q52105174), but I am not sure it is complete; I think my SPARQL query earlier this week showed many more). Similarly, there is reason for preferred rank (P7452), and maybe most probable value (Q98344233) is appropriate here. Egon Willighagen (talk) 08:09, 1 September 2024 (UTC)Reply
How do I mark it as preferred? Which qualifier do I use? Carl Ha (talk) 08:11, 1 September 2024 (UTC)Reply
Ranking is explained here: https://www.wikidata.org/wiki/Help:Ranking
I would suggest the qualifier property reason for preferred rank (P7452) with the value most probable value (Q98344233). Egon Willighagen (talk) 10:42, 1 September 2024 (UTC)Reply
Thank you! Carl Ha (talk) 11:12, 1 September 2024 (UTC)Reply
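For reference, the effect of ranking is visible in SPARQL: the truthy `wdt:` form returns only best-rank values, while the full `p:`/`ps:` form exposes every statement with its rank and qualifiers. A sketch, assuming the painting's dates are stored as inception (P571):

```sparql
# All inception (P571) statements of the painting, with rank and any
# "reason for preferred rank" (P7452) qualifier. The truthy form,
# wd:Q124551600 wdt:P571 ?date, would return only the preferred value.
SELECT ?date ?rank ?reason WHERE {
  wd:Q124551600 p:P571 ?statement .
  ?statement ps:P571 ?date ;
             wikibase:rank ?rank .
  OPTIONAL { ?statement pq:P7452 ?reason . }
}
```

So once 1793 is marked preferred, reports built on `wdt:P571` see only 1793, while both dates remain queryable.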
What should we do if we have a work where there is no consensus among art historians about the "preferred" dating? The dating of Q570188 is disputed, but Wikidata wants me to prefer one statement. Carl Ha (talk) 08:38, 2 September 2024 (UTC)Reply
@Carl Ha I don't think Wikidata constraints are written in stone and the real world sometimes brings challenges that no constraint can predict. In this case, in my view, you can disregard the exclamation marks, just leave it as it is for now. Wikidata constraints are here to serve us, not the other way round. Vojtěch Dostál (talk) 06:38, 5 September 2024 (UTC)Reply

Inclusion of icons to Wikidata:WikiProject sum of all paintings/Collection/Russian Museum

edit

Could we include things that have P31 set to "icon" (Q132137), as this is a type of painting? I don't know how to include that technically in the wiki code. Carl Ha (talk) 08:36, 1 September 2024 (UTC)Reply

It now works, I had a typo. Carl Ha (talk) 18:35, 5 September 2024 (UTC)Reply
I tried and now it seems that it just includes all elements including sculptures etc.

Would it be possible to have in the "inventory number" column just the inventory numbers connected with the Art Culture Museum Petrograd, and not the ones connected with other institutions (in this case always the Russian Museum) that later owned the paintings? Because now the table is not sortable by the inventory number of the Art Culture Museum. Thanks! Carl Ha (talk) 09:40, 1 September 2024 (UTC)Reply
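As a sketch of the underlying data model: each inventory number (P217) statement can carry a collection (P195) qualifier, so a report can keep only one institution's number by filtering on that qualifier (the museum's actual QID would need to be substituted; none is assumed here):

```sparql
# Inventory numbers (P217) together with their collection (P195)
# qualifier. Adding a constraint such as
#   ?statement pq:P195 wd:Q<museum-item> .
# (with the real QID) would keep only one institution's numbers.
SELECT ?painting ?inventoryNumber ?collection WHERE {
  ?painting p:P217 ?statement .
  ?statement ps:P217 ?inventoryNumber ;
             pq:P195 ?collection .
}
LIMIT 100
```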

WDQS infrastructure

edit

I am curious about the infrastructure currently used for running WDQS and about its costs. Where could I find this information? Zblace (talk) 18:05, 1 September 2024 (UTC)Reply

Try here, this page has a summary: https://diff.wikimedia.org/2023/02/07/wikimedia-enterprise-financial-report-product-update/ Baratiiman (talk) 05:37, 2 September 2024 (UTC)Reply
That has nothing to do with the Query Service Zblace is asking for. LydiaPintscher (talk) 07:40, 2 September 2024 (UTC)Reply
I'm also interested in that information. It would be interesting to see expenditures over time and number of machines/VMs. So9q (talk) 08:13, 7 September 2024 (UTC)Reply

Persia

edit

The Persian language is missing, I can't add it. Baratiiman (talk) 05:35, 2 September 2024 (UTC)Reply

Already exists: Q9168 Carl Ha (talk) 06:58, 2 September 2024 (UTC)Reply
Or do you mean in the box at the top of each item? There you have to type "fa" as the language code. Carl Ha (talk) 06:59, 2 September 2024 (UTC)Reply
@Baratiiman: what are you talking about? On items like femininity (Q866081) you did edit in Persian. But on Hawk tuah (Q127159727) you wrongly edited in English. Is your interface in Persian? If so, you should see Persian. Cheers, VIGNERON (talk) 09:23, 2 September 2024 (UTC)Reply
@Baratiiman: You may want to add a Babel template like {{#babel:fa}} to your user page. Alternatively, enable LabelLister.--GZWDer (talk) 11:25, 2 September 2024 (UTC)Reply

Why does this list not properly sort?

edit

Wikidata:WikiProject sum of all paintings/Exhibitions/0,10: the first column should be sorted by number, but the rows are in the wrong order. Carl Ha (talk) 06:57, 2 September 2024 (UTC)Reply

Proposal to remove all case data from all "COVID-19 in <Place>" items

edit

Special:Contributions/CovidDatahubBot added a number of statements about COVID-19 cases to items such as Q83873387. Such data are now largely out of date and push the items toward the size limit Wikidata can handle (and thus have long gone un-updated). Such data would be better expressed in a Commons tabular dataset instead. Also, many items can no longer be edited since they are reaching the size limit of Wikidata items, which causes issues like phab:T373554. GZWDer (talk) 13:28, 2 September 2024 (UTC)Reply

I agree that Tabular Data is a better way to store this data. While this is really part of a bigger problem (see Special:LongPages), it's good to explore simple solutions first. The removals should be performed in batches to reduce the number of edits made (if this proposal gets accepted). Dexxor (talk) 09:33, 3 September 2024 (UTC)Reply
Agree Vojtěch Dostál (talk) 09:56, 3 September 2024 (UTC)Reply
@GZWDer and @Dexxor , after deleting all the outdated data on Q83873387, my bot was able to link the article to the item using Pywikibot. I'm the reporter of the mentioned Phabricator ticket. I just wanted to mention that there is en:w:Template:COVID-19 data/data (see here for a clearer view of the data) updated daily by a bot, with the last update on 17 August 2024. I believe there will be no more updates from the endpoint. Thanks. Aram (talk) 19:27, 3 September 2024 (UTC)Reply

Announcing the Universal Code of Conduct Coordinating Committee

edit
Original message at wikimedia-l. You can find this message translated into additional languages on Meta-wiki. Please help translate to your language

Hello all,

The scrutineers have finished reviewing the vote and the Elections Committee have certified the results for the Universal Code of Conduct Coordinating Committee (U4C) special election.

I am pleased to announce the following individual as a regional member of the U4C, who will fulfill a term until 15 June 2026:

  • North America (USA and Canada)
    • Ajraddatz

The following seats were not filled during this special election:

  • Latin America and Caribbean
  • Central and East Europe (CEE)
  • Sub-Saharan Africa
  • South Asia
  • The four remaining Community-At-Large seats

Thank you again to everyone who participated in this process and much appreciation to the candidates for your leadership and dedication to the Wikimedia movement and community.

Over the next few weeks, the U4C will begin meeting and planning the 2024-25 year in supporting the implementation and review of the UCoC and Enforcement Guidelines. You can follow their work on Meta-Wiki.

On behalf of the U4C and the Elections Committee,

RamzyM (WMF) 14:05, 2 September 2024 (UTC)Reply

Wikidata weekly summary #643

edit

Help the Wikimedia Foundation learn more about on-wiki collaborations

edit

The Campaigns team at the Wikimedia Foundation is exploring how to expand its work on campaigns to support other kinds of collaboration. We are interested in learning from diverse editors who have experience joining and working on WikiProjects, campaigns, and other kinds of on-wiki collaboration. We need your help:

Whatever input you bring to the two spaces will help us make better decisions about next steps beyond the current tools we support. Astinson (WMF) (talk) 18:54, 2 September 2024 (UTC)Reply

Label of P813

edit

Hi! I think the label of P813 was changed by mistake. It has Arabic in an English field. Thanks WhisperToMe (talk) 22:09, 2 September 2024 (UTC)Reply

It got fixed. Thank you WhisperToMe (talk) 22:23, 2 September 2024 (UTC)Reply

Please help me

edit

Hi. I want to link this article with its Persian translation (this article), but its Wikidata page is locked. Can somebody help me link these two together to solve my problem? Hulu2024 (talk) 10:41, 3 September 2024 (UTC)Reply

  Done — Martin (MSGJ · talk) 10:59, 3 September 2024 (UTC)Reply

Have your say: Vote for the 2024 Board of Trustees!

edit

Hello all,

The voting period for the 2024 Board of Trustees election is now open. There are twelve (12) candidates running for four (4) seats on the Board.

Learn more about the candidates by reading their statements and their answers to community questions.

When you are ready, go to the SecurePoll voting page to vote. The vote is open from September 3rd at 00:00 UTC to September 17th at 23:59 UTC.

To check your voter eligibility, please visit the voter eligibility page.

Best regards,

The Elections Committee and Board Selection Working Group

MediaWiki message delivery (talk) 12:13, 3 September 2024 (UTC)Reply

Conflation

edit

These need help: Charles-Louis-Achille Lucas (Q19695615) Wolf Laufer (Q107059238) Fakhr al-Dīn Ṭurayḥī (Q5942448) RAN (talk) 08:54, 4 September 2024 (UTC)Reply

The date of death of Q19695615 has now been changed to 1905[1]; the source says 20th September - is there a reason it should be 19th? I reverted the recent additions to Q107059238 - I would have moved them to a new item, but one of the references for a 1601 date of death is claimed to have been published in 1600, and the links don't work for me ("The handle you requested -- 21.12147/id/48db8bef-31cf-4017-9290-305f56c518e9 -- cannot be found"). Q5942448 just had an incorrect date (1474 should have been 1674 - I removed it and merged with an item that already had 1674). Peter James (talk) 13:07, 4 September 2024 (UTC)Reply
Regarding Charles-Louis-Achille Lucas (Q19695615), the death certificate has been established on September 20, but the death happened the day before, on September 19. Ayack (talk) 14:46, 4 September 2024 (UTC)Reply
We have that happen with obituaries all the time, people add the date of the obituary rather than the date of death. --RAN (talk) 19:17, 4 September 2024 (UTC)Reply

Property for paused, interrupted etc.

edit

Trying to model "Between 1938 and 1941 it was reunited with Lower Silesia as the Province of Silesia" in Upper Silesia Province (Q704495). Is there any qualifier that says "not from ... to ..." or something? --Flominator (talk) 11:50, 4 September 2024 (UTC)Reply

Wikidata Query Service graph split to enter its transition period

edit

Hi all!

As part of the WDQS Graph Split project, we have new SPARQL endpoints available for serving the “main” (https://query-main.wikidata.org/) and “scholarly” (https://query-scholarly.wikidata.org/) subgraphs of Wikidata.

As you might be aware we are addressing the Wikidata Query Service stability and scaling issues. We have been working on several projects to address these issues. This announcement is about one of them, the WDQS Graph Split. This change will have an impact on certain uses of the Wikidata Query Service.

We are now entering a transition period until the end of March 2025. The three SPARQL endpoints will remain in place until the end of the transition. At the end of the transition, https://query.wikidata.org/ will only serve the main Wikidata subgraph (without scholarly articles). The query-main and query-scholarly endpoints will continue to be available after the transition.

If you want to know more about this change, please refer to the talk page on Wikidata.

Thanks for your attention! Sannita (WMF) (talk) 13:41, 4 September 2024 (UTC)Reply

I would very much like to avoid a graph split. I have not seen a vote or anything community-related in response to the WMF idea of splitting the graph. This is not a good sign.
It seems the WMF has run out of patience waiting for this community to mitigate the problem (e.g. by deleting the part of the scholarly graph not used by any other Wikimedia project) and thus free up resources for items that the community really cares about and that have a use for other Wikimedia projects.
This is interesting. I view it as the WMF technical management team having decided, in the absence of a timely response and reaction from the Wikidata community, how to handle the issues that our lack of e.g. a mass-import policy has created.
This sets a dangerous precedent for more WMF governance in the future, which might impact the project severely negatively.
I therefore urge the community to:
  • address the issue with the enormous revision table (e.g. by suggesting to WMF to merge or purge the revision log for entries related to bots, so that e.g. 20 edits in a row from a bot on the same date get squashed into 1 edit in the log)
  • immediately stop all bots currently importing items no matter the frequency until a mass-import policy is in place.
  • immediately stop all bots making repetitious edits to millions of items which inflate the revision table (e.g. User:LiMrBot)
  • immediately limit all users to importing x items a week/month until a mass-import policy is in place no matter what tool they use.
  • put up a banner advising users of the changes and encourage them to help finding solutions and discuss appropriate policies and changes to the project.
  • take relevant community steps to ensure that the project can keep growing in a healthy and reliable way both technically and socially.
  • assign a community liaison who can help communicate with WMF and try to avoid the graph split becoming a reality.
WDYT? So9q (talk) 09:36, 5 September 2024 (UTC)Reply
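The mitigation mentioned above, reviewing the scholarly items unused by any other Wikimedia project, can at least be scoped with a query: `wikibase:sitelinks` stores each item's sitelink count, so scholarly articles with zero sitelinks form the candidate set. A sketch (the full result set is tens of millions of items, so a LIMIT is needed to avoid a timeout):

```sparql
# Scholarly articles (Q13442814) with no sitelinks to any Wikimedia
# project, i.e. items not used by Wikipedia or its sister projects.
SELECT ?article WHERE {
  ?article wdt:P31 wd:Q13442814 ;
           wikibase:sitelinks 0 .
}
LIMIT 100
```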
Also see
M2k~dewiki (talk) 09:42, 5 September 2024 (UTC)Reply
Also see
M2k~dewiki (talk) 09:57, 5 September 2024 (UTC)Reply
Regarding the distribution/portion of content by type (e.g. scholarly articles vs. streets/architectural structures), see this image:
 
Content on Wikidata by type
M2k~dewiki (talk) 11:46, 5 September 2024 (UTC)Reply

Just to say that the problems with the Wikidata Query Service backend have been discussed since July 2021, that the split of the graph was introduced as a possibility in October 2023, and that we communicated about it periodically (maybe not to the best of our abilities, for which I am willing to take the blame, but we've kept our communication open with the most affected users during the whole time).

This is not a precedent for WMF telling the community what to do; the community is very much within its rights to make all the decisions it wants, but we need to find a solution anyway to a potential failure of the Wikidata Query Service that we started analysing in mid-2021. I want to stress that the graph split is a technical patch to a very specific problem, and that in no way are WMF or WMDE interested in governing the community. Sannita (WMF) (talk) 13:02, 5 September 2024 (UTC)Reply

I understand, thanks for the links. I have no problem with WMF or any of the employees. I see a bunch of people really trying hard to keep this project from failing catastrophically and I'm really thankful that we still have freedom as a community to decide what is best for the community even when we seem to be on a reckless path right now.
What I'm trying to highlight is the lack of discussion about the growth issue and about how to steer the community to grow by quality instead of quantity overall. I'm also missing a discussion, and information for e.g. bot operators, about the technical limitations we have because of hardware and software, and a governance that ensures that our bots do not break the system.
A perhaps horrifying example is the bot I linked above, which makes 25+ edits in a row to the same items, for potentially millions of items.
In that specific case we failed:
  • to inspect and discuss the operations of the bot before approval.
  • to clearly define the limits for Wikidata so we can make good decisions about whether a certain implementation of a bot is desired (in this case: make all the changes locally to the item, then upload = 1 revision).
Failings related to responsible/"healthy" growth:
  • we have failed as a community to ask WMF for input on strategies when it comes to limiting growth.
  • we have failed as a community to have discussions with votes on what to prioritize when WMF is telling us we cannot "import everything" without breaking the infrastructure.
  • we have failed as a community to implement new or update existing policies to govern the growth and quality of the project in a way that the community can collectively agree effectively manages the issues the WMF has been trying to tell us about for years.
We really have a lot of community work to do to keep Wikidata sound and healthy! So9q (talk) 17:21, 5 September 2024 (UTC)Reply
@So9q I see your points, and I agree with them. So much so, that I'm writing this message with my volunteer account on purpose and not my work one, to further stress that we need as a community to address these points. For what it's worth, I'm available (again, as a volunteer) to discuss these points further. I know for a fact that we'll have people who can provide us with significant knowledge in both WMF and WMDE, to take an informed decision. Sannita - not just another it.wiki sysop 17:53, 5 September 2024 (UTC)Reply
See this. There is yet another reason/thing to take into consideration: most Wikipedia language versions refrained from using Wikidata in infoboxes or articles. Doubt about Wikidata was one reason. Now as WP communities see that WD works and the data can be used, they will use WD more and more. For one example: we started to use Wikidata within the infobox for American settlements for several values, e.g. time zone, FIPS and GNIS, inhabitants, area and several more. We might add telephone area code and ZIP code in the near future. Some of them still only cross-check whether specific data in Wikidata and Wikipedia are the same, but might switch to Wikidata-only at any time. All the language versions will make greater use of Wikidata in the future. If WMF tells us we're breaking the infrastructure, they didn't do their job or did it wrong. Matthiasb (talk) 01:14, 6 September 2024 (UTC)Reply

Brazilian Superior Electoral Court database

edit

Hello everyone!

At Wikimedia Commons, I made a proposal for batch uploading all the candidate portraits from Brazilian elections (2004-2024). The user @Pfcab: has uploaded a big chunk, and while talking with @DaxServer:, he noticed that since divulgacandcontas.tse.jus.br has so much biographical data (example), it could be a relevant crosswiki project.

Would this be possible? Is there any bot that could, firstly, look for missing names in Wikidata (while completing all the rest), export the missing Superior Electoral Court biographical data, and add the respective images found in Category:Files from Portal de Dados Abertos do TSE?

Thanks, Erick Soares3 (talk) 16:37, 4 September 2024 (UTC)Reply

symmetric for "subclass of (P279)" ?

edit

Hi, why is there no symmetric "has subclass" counterpart to the property "subclass of (P279)"? I contribute on social science concepts and it is honestly complicated to build concept hierarchies when you cannot see the subclass items from the superclass item's page. Making a property proposal is a bit beyond my skills and interests; is anyone interested in looking into the question?

Thanks Jeanne Noiraud (talk) 17:37, 4 September 2024 (UTC)Reply

We're unlikely to ever make an inverse property for subclass of because there are items with hundreds of subclasses.
If you're using the basic web search, you can search with haswbstatement. To see all subclasses of activity (Q1914636) you would search haswbstatement:p279=Q1914636. Or you could use the Wikidata class browser (Q29982490) tool at https://bambots.brucemyers.com/WikidataClasses.php. And finally you could use the Wikidata Query Service at https://query.wikidata.org/. William Graham (talk) 18:26, 4 September 2024 (UTC)Reply
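For the Query Service route, a minimal sketch of what such a query could look like. The SPARQL here is an illustrative guess, not an existing saved query; only activity (Q1914636) and P279 come from the comment above:

```python
# Sketch: listing direct subclasses of activity (Q1914636) via the
# Wikidata Query Service. quote() just turns the SPARQL into a
# shareable query.wikidata.org link; nothing is fetched here.
from urllib.parse import quote

SPARQL = """\
SELECT ?sub ?subLabel WHERE {
  ?sub wdt:P279 wd:Q1914636 .                              # subclass of: activity
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
"""

share_url = "https://query.wikidata.org/#" + quote(SPARQL)
```

Pasting the query into https://query.wikidata.org/ directly works just as well; the link form is only convenient for sharing.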
Thanks, the class browser is helpful! The others are a bit too technical for me. Jeanne Noiraud (talk) 13:13, 6 September 2024 (UTC)Reply
The Relateditems gadget is useful for a quick look at subclasses, though it sometimes gets overcrowded if there are too many statements about an item. You can enable it here. Samoasambia 19:01, 4 September 2024 (UTC)Reply
Thanks, a useful tool indeed! Jeanne Noiraud (talk) 13:13, 6 September 2024 (UTC)Reply

Islamic dates versus Christian dates

edit

See: Ibrahim Abu-Dayyeh (Q63122057) where both dates are included. Do we include both or just delete the Islamic one? It triggers an error message. RAN (talk) 19:12, 5 September 2024 (UTC)Reply

@Richard Arthur Norton (1958- ): it's a hard question. We don't have a clear and easy way to indicate Islamic dates (which is a big problem in itself); right now Islamic dates are stored as Julian or Gregorian dates, which is wrong, so they should probably (and sadly) be removed. Cheers, VIGNERON (talk) 12:30, 6 September 2024 (UTC)Reply
I deleted the Islamic date; it was being interpreted as a standard AD date and was triggering an error message. --RAN (talk) 00:25, 7 September 2024 (UTC)Reply

Question about P2484

edit

Dear all,

Please see this question about property P2484: Property_talk:P2484#Multiple_NCES_IDs_are_possible

Thank you WhisperToMe (talk) 21:09, 5 September 2024 (UTC)Reply

Merging items (duplicates)

edit

Hello, could someone please merge the following items or explain to me how I can do this? I have tried it via Special:MergeItems, but it won't load for me (the gadget is already enabled in the preferences). The entries are duplicates.

Thanks Аныл Озташ (talk) 22:58, 5 September 2024 (UTC)Reply
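When Special:MergeItems won't load, the same merge can be done through the MediaWiki API's wbmergeitems action. A hedged sketch: the Q-ids and token below are placeholders (not the items from this thread), and the actual POST is left commented out, since a real call needs a logged-in session and a CSRF token:

```python
# Sketch: building the request for a Wikidata merge via action=wbmergeitems.
# "Q1111111" / "Q2222222" are placeholder item IDs.

API_URL = "https://www.wikidata.org/w/api.php"

def build_merge_params(from_id: str, to_id: str, token: str) -> dict:
    """POST parameters for merging from_id into to_id."""
    return {
        "action": "wbmergeitems",
        "fromid": from_id,  # item to be merged away (becomes a redirect)
        "toid": to_id,      # item that survives the merge
        "token": token,     # CSRF token from action=query&meta=tokens&type=csrf
        "format": "json",
    }

params = build_merge_params("Q1111111", "Q2222222", "<csrf-token>")
# requests.post(API_URL, data=params)  # would perform the merge
```

The Merge gadget in the preferences does the same thing behind the scenes, so this is only needed when the interface routes fail.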

Merge?

edit

These are the same supercomputer with performance measured at different times, perhaps with slight modifications at each measurement. Should they be merged? Ranger (Q72229332) Ranger (Q73278041) Ranger (Q72095008) Ranger (Q2130906) RAN (talk) 23:45, 5 September 2024 (UTC)Reply

Yes. I used it Vicarage (talk) 04:35, 6 September 2024 (UTC)Reply
Not entirely sure, but probably yes, at least for some of them (maybe not the last one?); and if not merged, these items should be more clearly differentiable. It needs someone who understands exactly what this is about. The first three were created by the bot TOP500_importer; maybe Amitie 10g can tell us more. Cheers, VIGNERON (talk) 12:16, 6 September 2024 (UTC)Reply

How to add units to a thermal power plant?

edit

I think I asked this question before but if so it was so long ago I have forgotten the answer.

In a thermal power plant, such as a coal-fired power plant, the number of units and their capacity in megawatts are very basic pieces of information. For example, the Global Coal Plant Tracker methodology (https://globalenergymonitor.org/projects/global-coal-plant-tracker/methodology/) says its "database tracks individual coal plant units".

I am writing on Wikipedia about coal-fired power plants in Turkey and I pick up the infobox data automatically from Wikidata. At the moment I am editing https://en.wikipedia.org/wiki/Af%C5%9Fin-Elbistan_power_stations. Ideally I would like the infoboxes to show that the A plant has 3 operational units of 340 MW each and one mothballed unit of 335 MW, all of which are subcritical, plus 2 proposed units of 344 MW each; and that the B plant has 4 units of 360 MW each, all operational.

If that is too ambitious just the number of units would be a step forward, as shown in infobox params ps_units_operational, ps_units_planned etc. Is that possible? Chidgk1 (talk) 09:09, 6 September 2024 (UTC)Reply

https://www.wikidata.org/wiki/Q85967587#P2109 ? Bouzinac💬✒️💛 15:29, 6 September 2024 (UTC)Reply
And you might use this query to list power-related Wikidata items by MW capacity: https://w.wiki/B7U9 Bouzinac💬✒️💛 19:27, 6 September 2024 (UTC)Reply
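A sketch of what such a capacity query might look like, using installed capacity (P2109) from the comment above. Note that Q159719 as the power-station class is my assumption, not taken from the linked query, and the SPARQL is illustrative rather than a copy of it:

```python
# Sketch: power stations ordered by installed capacity (P2109).
# Q159719 is assumed to be "power station"; verify before relying on it.
SPARQL = """\
SELECT ?plant ?plantLabel ?mw WHERE {
  ?plant wdt:P31/wdt:P279* wd:Q159719 ;   # instance of (a subclass of) power station
         wdt:P2109 ?mw .                  # installed capacity, in megawatts
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
ORDER BY DESC(?mw)
LIMIT 100
"""
```

Per-unit data (as opposed to whole-plant capacity) would still need qualifiers or separate items for each unit, which is the harder part of the original question.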
@Bouzinac Thanks for the quick reply, but I don't quite understand; maybe my question was unclear? Chidgk1 (talk) 08:18, 7 September 2024 (UTC)Reply

Merging items (duplicates)

edit

Can someone please merge these two pages, since they are about the same hospital. The current name of the hospital is AdventHealth Daytona Beach.

Florida Hospital Memorial Medical Center (Q30269896) to AdventHealth Daytona Beach (Q130213551)

Catfurball (talk) 20:25, 6 September 2024 (UTC)Reply

How best to model a long development project for a property

edit

For Harbor Steps (Q130246591), [2] page 7 gives a good table summarizing the process of assembling the land, planning, and construction, especially the dates associated with three different phases of work. I imagine this can be appropriately modeled using existing properties, but I do not know how; it is not the sort of thing I've ever seen modeled here. - Jmabel (talk) 20:33, 6 September 2024 (UTC)Reply

KUOW

edit

KUOW (Q6339681) is just a separately licensed transmitter for KUOW-FM (Q6339679). No programming content of its own, really just a repeater. I suppose it still merits a separate item, but I suspect the two items should somehow be related to one another, which they seem not to be currently. - Jmabel (talk) 05:42, 3 August 2024 (UTC)Reply

(Reviving the above from archive, because no one made any suggestions on how to do this. - Jmabel (talk) 20:36, 6 September 2024 (UTC))Reply

Q107019458 and Q924673 merge?

edit

Should The New York Herald (Q107019458) and New York Herald (Q924673) be merged, or are they separate incarnations? RAN (talk) 00:24, 7 September 2024 (UTC)Reply

The different identifiers are because the New York Herald was combined with the New York Sun to form The Sun and the New York herald (Q107019629) from February to September 1920 (https://www.loc.gov/item/sn83030273). The New York Herald (Q107019458) is from October 1920, when it became a separate newspaper again, to 1924. I don't know if the P1144/P4898 identifiers and 8-month gap are enough for separate items. Peter James (talk) 11:03, 7 September 2024 (UTC)Reply