Wikidata talk:Lexicographical data/Archive/2019/07

This page is an archive. Please do not modify it. Use the current page, even to continue an old discussion.

Help needed on Property talk:P6712

Hi y'all,

Following the previous section #Again letters, I had a question about the datatype of P6712 (P6712) that I asked there: « why is the datatype lexeme and not item? » @Jura1: is responsible for choosing (pushing) this datatype and for creating the property, but they can't or won't answer (the discussion now revolves around a straw-man argument and is going nowhere). @Liamjamesperritt, ArthurPSmith: seem to share my question. So, can maybe someone help me understand why this property should have the datatype lexeme and not item? And if not, what should be done about it?

Cheers, VIGNERON (talk) 16:29, 29 June 2019 (UTC)

I've been thinking about this a bit more. While I still think item datatype would be better for P6712 (P6712), I can actually see a purpose in having lexeme entries for the letters of the alphabet - and for letters of other alphabets too - because they do actually occur in the spoken language when people discuss spelling! So it's useful to have, for example, IPA transcription and sound properties attached to the letters for a given language, to show how those letters are pronounced within that language. They probably should also have spellings (forms) that are different from the single character glyph, to represent how the name is pronounced. For example in English, "aleph" for א is clearly useful as a spelling; should we also have 'double-u' for 'w'? Anyway, as entities of the language, letters can certainly legitimately be lexemes, and I was wrong to argue otherwise earlier. ArthurPSmith (talk) 14:36, 1 July 2019 (UTC)
Do you think we can have א as an English lexeme? Because simply "aleph" as an English lexeme is indisputable. --Infovarius (talk) 09:05, 3 July 2019 (UTC)

Different words for males, females etc.

Hi, it is not unusual for languages to use different words for animal males, females and young animals. The same applies to occupations etc. These generally share the same concept in Q-items. Is it correct to use qualifiers like in vlčice (L51702) or vlče (L51705) in such cases? Or should such a thing be modelled another way? (In that case, how?)--Lexicolover (talk) 21:42, 29 June 2019 (UTC)

It makes sense to me. Pamputt (talk) 08:29, 30 June 2019 (UTC)
Several things:
  • It seems that this is a designation for animals of different species; it may be cool to directly link the common name to the species in different senses.
  • An alternative that may work: instead of using a vague qualifier like « applies to », use several « item for this sense » statements, like
   Sense:
       item for this sense (P5137)   wolf (Q18498)     
       item for this sense (P5137)   adult (Q80994)     
       item for this sense (P5137)   female organism (Q43445)     
with the convention that all the items in a sense are true at the same time, so for example this sense's meaning would be « female adult wolf ».
Why would we use « applies to » in such a case? author  TomT0m / talk page 20:32, 30 June 2019 (UTC)
@TomT0m:: Actually yes, it is for several species. Czech generally uses binomial names for organisms, just as scientific names do. It would be better to link to the exact species (well, in most cases it is used for a few species native to the region), but at the same time that would make it almost unusable for genera with tens or hundreds of species, and for a few other non-organism senses.
Thank you for your suggestions, I really appreciate it. I've tried to find the simplest and most general solution. I don't know if applies to part (P518) is good or not; it was the closest to what I was looking for. I can't use sex or gender (P21) since I need more points of view than just sex (even though it will probably be the most used). I've seen discussions about etymology properties; from those I gathered it is preferable to go with a minimum number of properties (because of querying and external tools). Combining several items is the solution I did not think about. On the other hand, it seems to me much more difficult for human reading.--Lexicolover (talk) 21:29, 30 June 2019 (UTC)
@Lexicolover: corrected the illustration of the combination of multiple « item for this sense » as it was all messed up.
« I can't use sex or gender (P21) since I need more points of view than just sex » Well, why not use other properties for those other « points of view »? author  TomT0m / talk page 07:19, 1 July 2019 (UTC)
(also, I remember creating a long time ago an item female wolf (Q27929033) for an experiment; that is also a solution. Anyway, I don't think we want different parts of Wikidata to use different ways to express the simple thing « female wolf ».) author  TomT0m / talk page 07:44, 1 July 2019 (UTC)
I thought it was desired to use the minimum possible number of properties. What do I gain for the linguistic description by using more properties just to say "it applies only to"? And what property should I use, for example, for those young animals?--90.179.116.138 19:38, 1 July 2019 (UTC)
@TomT0m: I suppose there should be a single item for this sense (P5137) for a single Sense. Here we could create a new property like "sense combines topics", similar to category combines topics (P971). --Infovarius (talk) 09:09, 3 July 2019 (UTC)
@Infovarius: In that case I’d suggest something more precise, in the spirit of union of (P2737)   and disjoint union of (P2738)  , a property like « denotes instances of all »
 Sense:
    denotes instances of all the classes: list of values as qualifiers (Q23766486)     
       of (P642)   wolf (Q18498)     
       of (P642)   adult (Q80994)     
       of (P642)   female organism (Q43445)     
(if wolf (Q18498) is the class of all wolves, female organism (Q43445) the class of all female organisms and adult (Q80994) the class of all adult organisms, then this sense denotes the organisms which are instances of all of them; in more mathematical terms, an instance of their intersection). author  TomT0m / talk page 09:56, 3 July 2019 (UTC)
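To make the intersection reading concrete, here is a toy Python sketch (the individuals and their class memberships are invented placeholders, not Wikidata data): an individual is denoted by the sense only if it belongs to every listed class.

```python
# Toy model of « denotes instances of all »: each class is a set of
# individuals (invented names, not real Wikidata entities).
wolves = {"lupa", "rex", "cub1"}              # instances of wolf (Q18498)
adults = {"lupa", "rex", "bear1"}             # instances of adult (Q80994)
female_organisms = {"lupa", "cub1", "bear2"}  # instances of female organism (Q43445)

# The sense denotes the intersection of all listed classes,
# i.e. "female adult wolf".
denoted = wolves & adults & female_organisms
```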

Bugfix: constraint that was removed from property but still displayed on Lexeme

Hello all,

Last week we fixed a bug that some of you noticed: a constraint that was removed from a property was still displayed on Lexemes. If you spot any remaining issues on this topic, please let me know or comment in this ticket. Cheers, Lea Lacroix (WMDE) (talk) 16:43, 1 July 2019 (UTC)

Shaping language variants occurrences

Hello! In a few days we will have a really big bunch of nouns in Basque uploaded as Lexemes to Wikidata. Every word has 46 forms, and they will also have their senses. This can help us make some good experiments using Basque words as a base, for example in Wiktionary. But we have started thinking about future developments based on these words, and we have thought about the project Ahotsak, which records people talking about many things and then makes exact phonetic transcriptions of what is being said.

Basque is a language with lots of variants and subvariants, but the words are standardized. Let's say aita (L49255), the word for father. This is written as aita, but can be pronounced as aitá, áita, aitte, atxa... These subvariants can be well formatted on Wikidata, and we can even add audio. But we can also provide where this word has been recorded (with coordinates or, most commonly, a locality), so that we can, in the future, build isoglosses with this information. How can we model this location information, so that in the future we have phonetic and written testimony of the variants' extent? -Theklan (talk) 14:38, 28 June 2019 (UTC)

Hi Theklan !
Great idea!
First, I would say that each variant deserves to have its own separate form.
You can start by looking at mądry (L24242) (140 forms, our record so far and with several audio files).
For the precise localisation, I don't know (it has never been done as far as I can tell). Plus, I'm wondering: shouldn't it be done on the Wikimedia Commons side? (not sure that's enough).
The only case I see where something like that is explicitly indicated is if the variant is specific to a dialect (or any lect for that matter). Then you can use this lect in the Spelling variant, for instance eu-x-Q17354876 for Q17354876.
Cdlt, VIGNERON (talk) 15:48, 28 June 2019 (UTC)
If it's spelled the same then I don't think different forms are warranted. If the dialect/variant is named or somehow conceptualizable as an item, then we have pronunciation variety (P5237) which can be attached as a qualifier to the IPA transcription (P898) or pronunciation audio (P443) statements on the form, and the geographic coordinates etc. can be attached to the item. If that doesn't really make sense, then I think you can just add additional qualifiers to the pronunciation audio (P443) etc. statements on the form. ArthurPSmith (talk) 15:54, 28 June 2019 (UTC)
@VIGNERON: I'm not talking about forms, but about pronunciation variants, which are not officially coded but exist. In the example I gave, aita (L49255), you have all the forms stated, but the pronunciation of most of these forms will vary depending on the place. Take for example oui (L9089). In most places (afaik) it is pronounced [wi], but you know it is not uncommon to hear it as [we] or even [ue]. The word is /oui/, but the pronunciations can be geographically shaped without being real variants or forms. In Basque this is very evident: you can tell where a speaker comes from if instead of pronouncing [etxe] (house) they pronounce [etxí]. But you can also hear, for the definite form etxea, the pronunciations [etxea], [etxie], [etxia], [etxiya] or [etxiye]. And these can't be subforms of the definite form, but they can definitely be shaped as data: someone can write down the pronunciation, we can have separate audio files and we can record a place for the recording. The issue is: how can we shape this in a perfect way, so that it is not only for Basque?
@ArthurPSmith: Indeed, we can shape it as language variants, but these variants could be too many, as the dialect can be named but it's not something official. -Theklan (talk) 19:20, 30 June 2019 (UTC)
@Theklan:: Hello, I would do it as follows (and as I understand it, VIGNERON suggests an almost identical way):
FORM: etxea (grammatical features)
      STATEMENT: <IPA>: [etxea]
                 QUALIFIER: <pronunciation variety>: dialect 1 item
      STATEMENT: <IPA>: [etxie]
                 QUALIFIER: <pronunciation variety>: dialect 2 item
                 QUALIFIER: <pronunciation variety>: dialect 3 item
      STATEMENT: <IPA>: [etxiya]
                 QUALIFIER: <pronunciation variety>: dialect 4 item
I think that would be a correct approach. It might be tricky for languages that do not have well-described dialects, but it should still be possible to just use a region instead of a dialect. But the issue I see is that you might be doing (your own) research in this area, so you would be getting a lot of data here. Wikibase is a great tool to analyse data, but I am not so sure Wikidata is so great in this area (well, it depends on what we expect from it, so I am not against it).--Lexicolover (talk) 19:55, 30 June 2019 (UTC)
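Schematically, this layout could be represented in Python like this (this is only the logical shape, not the exact Wikibase API serialization; the "Q-dialect-N" item IDs are placeholders, not real Wikidata entities):

```python
# One Form carrying several IPA transcription (P898) statements, each
# qualified by pronunciation variety (P5237), as in the sketch above.
# "Q-dialect-N" are placeholder IDs, not real Wikidata items.
form = {
    "representation": "etxea",
    "statements": [
        {"property": "P898", "value": "[etxea]",
         "qualifiers": {"P5237": ["Q-dialect-1"]}},
        {"property": "P898", "value": "[etxie]",
         "qualifiers": {"P5237": ["Q-dialect-2", "Q-dialect-3"]}},
        {"property": "P898", "value": "[etxiya]",
         "qualifiers": {"P5237": ["Q-dialect-4"]}},
    ],
}
```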
@Theklan: oh, my bad, since you wrote it down I thought it was spelling variants too and not just pronunciation variants (I'm probably biased by Breton here). In that case, I would put the several pronunciations in several statements of the same form, with the precision in a qualifier. And you raise a good question: it should be consistent for all languages, but I'm not sure we actually have a definitive structure for that (and it makes me think that mądry (L24242) may not be correct, as with several pronunciations it wouldn't be clear which statement refers to which pronunciation; Lexicolover's proposal sounds better, ping @KaMan: what do you think?). So thanks again for raising the question, but sadly I don't have a definitive answer :/ But hopefully, the community will soon agree on it ;) Cheers, VIGNERON (talk) 20:00, 30 June 2019 (UTC)
@Lexicolover: This proposal sounds GREAT! -Theklan (talk) 20:44, 30 June 2019 (UTC)
@Theklan: good to see that Basque content will increase. One question: how do you plan to add this "really big bunch of nouns"? Do you plan to do it by hand, or did you code a bot for that? In the second case, could you run your bot on a few examples to evaluate what we will get at the end? Thanks in advance. Pamputt (talk) 08:58, 29 June 2019 (UTC)
@Pamputt: The data is being uploaded by Elhuyar Fundazioa using a bot. They have been evaluated and accepted for that. -Theklan (talk) 19:20, 30 June 2019 (UTC)
@Theklan: Ok, what has been done by Elhuyar_Fundazioa looks fine because it concerns only Forms. However, I wonder about the Senses because they are copyrighted data. I would like to be sure that the Elhuyar Dictionary is licensed under CC0 or equivalent. It does not seem to be the case according to this page, which says that the data are licensed under CC BY-NC-ND. Pamputt (talk) 20:11, 30 June 2019 (UTC)
@Pamputt: They are uploading it within an agreement with the user group, so yes, the data will now be under CC0, and they have inserted a link to their dictionary, so we can mutually benefit from each other (they provide translations and soon they could also take images from Commons to illustrate their dictionaries). In the same way, magic (L3) has a link to the OED, which is not free. -Theklan (talk) 20:21, 30 June 2019 (UTC)
@Theklan: I have no doubt that you are working with them. My point is that if we start to upload data from their dictionary, then they should update their licence to CC0, otherwise it is a licence violation. Or maybe they could send a ticket to OTRS in order to state officially that they release their dictionary under CC0. About magic (L3), as far as I know, this is a different case because no data of this lexeme comes from the OED (it is only a link). Pamputt (talk) 20:58, 30 June 2019 (UTC)
@Pamputt: I think we are mixing up their multilingual dictionary (which is under CC BY-NC-ND) and the definitions, which are not covered by the online dictionary they are linking; the definitions will be uploaded as senses but are not online there under that license. -Theklan (talk) 21:13, 30 June 2019 (UTC)
@Theklan: sorry if I mixed up the multilingual dictionary and the definitions. This is indeed the case. So the question becomes: where do the definitions come from? Even if they are offline, there is a licence on them (the same as for the paper dictionary), so I would like to be sure that they are licensed under CC0. Is there any "proof" somewhere (a simple email from Elhuyar Fundazioa to OTRS should be enough)? Just to be sure I understand correctly, is there already a Basque lexeme with a definition? Pamputt (talk) 21:29, 30 June 2019 (UTC)

Ok @Pamputt:! I have written them so they can say something here or take action. It will take some days, though. -Theklan (talk) 08:44, 1 July 2019 (UTC)

FYI, if there is a need to authenticate data providers and to state that the data is released under CC0, you can use the OTRS queue at info wikidata.org. Lea Lacroix (WMDE) (talk) 08:23, 7 July 2019 (UTC)

New user script to simplify adding forms on lexemes

Hi everyone! I’ve written a user script (documentation) to make it easier to add Forms to Lexemes that don’t have any Forms yet: when you view a Lexeme without Forms, it will determine the matching template(s) of the Wikidata Lexeme Forms tool and add links to them below the regular “add Form” link (see the announcement tweet for screenshots). I hope that this will be useful to some of you! --Lucas Werkmeister (talk) 13:06, 8 July 2019 (UTC)

Again letters

I propose to merge a (L20817) and a (L45484). It is a letter of the same script, independently of language. And we can use "multiple languages" or similar instead of language in the headings. @Airon90, Liamjamesperritt, Jura1: any objections? --Infovarius (talk) 20:36, 21 June 2019 (UTC)

Bot creation to move lexicographical data

Hello.
I have just started working on a bot for Wikidata which would be able to introduce part of the lexicographical data from the Lo Congrès online dictionary into Wikidata.
The project concerns Lexemes from 3 languages (French, Occitan Lengadocian and Occitan Gascon) and will add, at first, the lemmas, the forms, the translation relationships between words and the variant relationships. This bot will take the data from a .csv file and create new Lexemes from it.
I would also like to share the code with anyone interested, so I am trying to make this bot reusable for other languages.
I am just at the beginning of this project, but I first wanted to introduce myself and the project to you. --Aitalvivem (talk) 18:36, 3 July 2019 (UTC)

I'm not entirely sure, but it looks like this data is provided under a CC-BY license from this page. If you can get in touch with the owners of the content you should probably check that they are ok with your importing this data into Wikidata. From previous discussions of such sources, it seems clear that at least definitions (for senses) could not be imported by a bot, without further clarity on the license. ArthurPSmith (talk) 22:09, 3 July 2019 (UTC)
Yes, Wikidata is licensed under CC0, so all the data imported into it has to be at most CC0 as well. To get an idea of what is copyrightable in lexicographical data, you can read this legal analysis by a lawyer from the WMF. Pamputt (talk) 05:36, 4 July 2019 (UTC)
@ArthurPSmith @Pamputt Indeed, some dictionaries used by Lo Congrès are provided under a CC-BY license, but some others are free (3 dictionaries). I am working for Lo Congrès, so we are aware of this constraint and we will only import free data to Wikidata.--Aitalvivem (talk) 13:03, 4 July 2019 (UTC)
Welcome Aitalvivem/AitalvivemBot :)
If we need to have an official statement from the organization at some point, to state that the data is released under CC0, we can use the Wikidata OTRS queue at info wikidata.org. Lea Lacroix (WMDE) (talk) 15:13, 4 July 2019 (UTC)

Hello, I wrote a function to create a Lexeme, but when I try it in the test environment I always get the same error:

{
    "error": {
        "code": "failed-save",
        "info": "The save has failed.",
        "messages": [
            {
                "name": "wikibase-api-failed-save",
                "parameters": [],
                "html": {
                    "*": "The save has failed."
                }
            }
        ],
        "*": "See https://test.wikidata.org/w/api.php for API usage. Subscribe to the mediawiki-api-announce mailing list at <https://lists.wikimedia.org/mailman/listinfo/mediawiki-api-announce> for notice of API deprecations and breaking changes."
    },
    "servedby": "mw1287"
}

Any idea how I could fix it?--Aitalvivem (talk) 09:23, 8 July 2019 (UTC)

@Aitalvivem: Can you show us the request you send (without the edittoken, that is private), that led you to this? --Hoo man (talk) 09:59, 8 July 2019 (UTC)
@Hoo man: Of course, here is an example of a request generated by my code:
	{
		'action': 'wbeditentity',
		'format': 'json',
		'new': 'lexeme',
		'token': CSRF_TOKEN,
		'data': {'labels':{'ostal':{'type':'lexeme', 'lemma':'ostal', 'language':'oc'}}}
	}

You can find my code here https://github.com/aitalvivem/AitalvivemBot --Aitalvivem (talk) 10:14, 8 July 2019 (UTC)

@Aitalvivem: isn't labels for items only? Not sure (neither the error message nor the wbeditentity doc is clear), but maybe you can try with lemmas instead? Cdlt, VIGNERON (talk) 08:50, 10 July 2019 (UTC)
@VIGNERON: I tried changing "labels" to "lemmas". I got some clearer error messages, so I had to adapt a few parameters in my request. But now I am stuck on the same error message again :/
Here is my new request:
	{
		'action': 'wbeditentity',
		'format': 'json',
		'new': 'lexeme',
		'token': CSRF_TOKEN,
		'data': {'lemmas':{'oc':{'type':'lexeme', 'lemma':'ostal', 'value':'ostal', 'language':'oc'}}}
	}
I tried without the 'type' and 'lemma' parameters, but every time the API answers with the same "failed-save" error message.--Aitalvivem (talk) 12:25, 10 July 2019 (UTC)
@Aitalvivem, VIGNERON: Here's the content of data that produced ljusgul (L54797). Note that you have to include the Q-id of the language and lexical category. --Vesihiisi (talk) 12:43, 10 July 2019 (UTC)
{
  "type": "lexeme",
  "lemmas": {
    "sv": {
      "language": "sv",
      "value": "ljusgul"
    }
  },
  "language": "Q9027",
  "lexicalCategory": "Q34698",
  "forms": [
    {
      "add": "",
      "representations": {
        "sv": {
          "language": "sv",
          "value": "ljusgul"
        }
      },
      "grammaticalFeatures": [],
      "claims": []
    }
  ]
}
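For later readers, here is a minimal Python sketch of how such a payload can be assembled for the wbeditentity action (the token value is a placeholder; note that when the request is POSTed to the action API, the data parameter must be sent as a JSON-encoded string, not as a nested dict):

```python
import json

# Entity data mirroring Vesihiisi's working example for ljusgul (L54797):
# Q9027 = Swedish, Q34698 = adjective.
entity_data = {
    "type": "lexeme",
    "lemmas": {"sv": {"language": "sv", "value": "ljusgul"}},
    "language": "Q9027",
    "lexicalCategory": "Q34698",
    "forms": [{
        "add": "",
        "representations": {"sv": {"language": "sv", "value": "ljusgul"}},
        "grammaticalFeatures": [],
        "claims": [],
    }],
}

# POST parameters for wbeditentity; `data` is serialized to a JSON
# string before sending.
params = {
    "action": "wbeditentity",
    "format": "json",
    "new": "lexeme",
    "token": "CSRF_TOKEN",  # placeholder: fetch a real CSRF token first
    "data": json.dumps(entity_data),
}
```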
@Vesihiisi: Thank you very much, that was exactly what I needed!--Aitalvivem (talk) 15:39, 10 July 2019 (UTC)

@Aitalvivem: I have deleted over 80 items you created because they did not comply with our notability policy (like Q65316588 "Pronom possessif 3e personne du singulier", Q65295676 "Pronom personnel réfléchi tonique 1ere personne du pluriel", or Q65247119 "Adjectif masculin pluriel"), which suggests that you are familiar neither with how Wikidata is modelled nor with our policies. Most of the deleted items were created in one single day, which makes me think that you were in a rush. Also, I have performed several merges of items that you created as duplicates of already existing items. Wikidata's learning curve is not insurmountable, but it does take time. I bet you mean well, and the task you are proposing seems to have great potential; however, at this point I think it would be unwise for you to engage in mass edits. Please take some more time to get to know Wikidata. As you can see, there will always be plenty of people willing to answer any doubts, but you do need to ask for help when needed/unsure. Andreasm háblame / just talk to me 06:08, 23 July 2019 (UTC)

@Andreasmperu: Hello, I'm sorry for the trouble. I was trying to add the missing lexical categories that I will use when inserting Lexemes with my bot. I have now modified my file, which converts lexical categories into item IDs, to only use items that already exist in Wikidata. Just to be sure, could you tell me what the problem with those items was? I thought they would meet the third criterion of the notability policy, but I may have misunderstood it. Aitalvivem (talk) 09:38, 23 July 2019 (UTC)
@Aitalvivem: the "good" way is to add the categories in several parts. For instance, singular (Q110786) + masculine (Q499327) is enough; there is no need for "singulier masculin" (and it's often easier to query afterwards). That said, it's not always clear nor possible (I still don't know for sure how to model some lexical categories, like "plural of plural" for lagad (L114); here a new item is probably needed. I've been here for almost 7 years and I think I know Wikidata, but I'm still not always sure, @Andreasmperu:), so don't hesitate to ask; indeed, there are a lot of people here who can help. Cdlt, VIGNERON (talk) 16:53, 23 July 2019 (UTC)
@VIGNERON: I understand that it is better to add the categories in several parts (and that is what I do for grammatical categories), but my purpose here was to be more specific: as I saw it, singular (Q110786) + masculine (Q499327) represents the masculine singular form of a Lexeme. But "singulier masculin" represents a Lexeme that only exists as a masculine singular (no feminine, no plural), so this is recorded as the nature of the Lexeme in the Lo Congrès database. There can be only one item linked to the nature of a Lexeme, so I thought it was a good idea to add those categories.--Aitalvivem (talk) 08:14, 24 July 2019 (UTC)

Issues I have encountered (so far)

As pointed out by @VIGNERON: despite all the years spent here, we still hesitate at times. I was planning to create all Spanish demonyms, and during that task I came across the following issues, which I hope somebody has solved or we can try to work out:

  1. I have not found a way to enter two spelling variations in the same language, so I had to create two different lexemes (for example, austriaco (L55510) and austríaco (L55511)). In Spanish, there are several cases where the same word can be spelled with or without an accent. Since there is no preference for either of them, they are considered to be one word and thus only have one dictionary entry. Because of that, I thought it would be reasonable to have both of them together in one single lexeme. Unfortunately, when I tried to add a spelling variant as a lemma or as a form, I got the same message: "It is not possible to enter multiple representations with the same spelling variant." I have linked both lexemes with synonym (P5973), but this does not capture the idea that they can be used interchangeably.
  2. I have not found a way to establish that one lexeme is preferred over another one. For instance, azerbaiyano (L55512) is the preferred demonym, but azerí (L55514) is also a recognised valid one. This could be the case for the whole lexeme or for only a specific sense, so I would like to find a way to express that.
  3. The list of lexical categories currently being used is way too long, so I think it would be a good idea to establish some guidelines. For instance, pronoun (Q36224) can be the lexical category, and then a more specific form of pronoun (like personal pronoun (Q468801)) can be added with instance of (P31). There is mw:Extension:WikibaseLexeme/Data Model, and Wikidata:Lexicographical data/Layout (marked as a work in progress), so it can get a bit confusing.
  4. Do we have any showcase lexemes anywhere? I have not been able to find any, and they would be great training for new editors, and for the not-so-new too. Right now, the learning process is not straightforward.
  5. And lastly, grammatical features. As with lexical categories, I think it would be best to establish a set number of options. However, I have come across some tricky ones. For instance, the second and fourth forms of mío (L56587) (míos and mías in Spanish; miens and miennes in French) refer to first-person singular (Q51929218), but are to be used when the object is plural (Q146786). So first-person singular (Q51929218) turned out to be a useful item, but I am still unsure if it is a clear representation.

I could really use some help regarding any of these issues, so I'll be reading you all. Andreasm háblame / just talk to me 00:15, 26 July 2019 (UTC)

Hi,
I'll try to answer (these are just my quick answers, probably not the perfect ones ;) ):
Cdlt, VIGNERON (talk) 07:30, 26 July 2019 (UTC)
I think I agree mostly with VIGNERON here. On question 2 - I've been thinking it would be nice to be able to have a "frequency" property for lexemes, to indicate how often they appear in a reference collection of writings; then the "preferred" lexeme would be the more frequently used one, in general. But that probably needs a bunch of work to model and figure out good sources for... ArthurPSmith (talk) 17:09, 26 July 2019 (UTC)
I disagree slightly with @VIGNERON:. I'll use instance of (P31), but not as a substitute for the word-class property (i.e. instance of disputed/nonstandard usage is certainly something I'd use). Sometimes there is no good target for has characteristic (P1552), so instance of (P31) has to substitute (cf. above regarding invariable nouns).
Vigneron, you do know that attempting to enter two spelling variations for the same language is impossible and triggers an error message, right? Circeus (talk) 22:55, 26 July 2019 (UTC)
@Circeus: thanks for your slight dissent; it sounds interesting, but I'm not sure I understand what you're saying about instance of (P31). Could you give an example? And yes, I obviously know, but my remark still stands, doesn't it? Cheers, VIGNERON (talk) 06:52, 27 July 2019 (UTC)
instance of (P31) substituting for... language style (P6191) (late (L8)), etymological/method of derivation info (lügenial (L1841)), has characteristic (P1552) (pige (L1113), but there's probably a certain debate to be had about what can and cannot be used for Property:P1552), more general classificatory information (rot (L818), Pâques (L26046)). With the exception of the third case, all of those would be entirely unfit for the "lexical category" aspect of the Lexeme entry, much less has characteristic (P1552). The first instance is a legacy use from before language style (P6191), but that property is not going to be easy to find if you don't know about it already, so I'd expect plenty of it. Circeus (talk) 18:39, 27 July 2019 (UTC)