On this page, old discussions are archived. An overview of all archives can be found at this page's archive index. The current archive is located at 2023.

Welcome to Wikidata, Epìdosis!
Wikidata is a free knowledge base that you can edit! It can be read and edited by humans and machines alike and you can go to any item page now and add to this ever-growing database!
Need some help getting started? Here are some pages you can familiarize yourself with:
- Introduction – An introduction to the project.
- Wikidata tours – Interactive tutorials to show you how Wikidata works.
- Community portal – The portal for community members.
- User options – including the 'Babel' extension, to set your language preferences.
- Contents – The main help page for editing and using the site.
- Project chat – Discussions about the project.
- Tools – A collection of user-developed tools to allow for easier completion of some tasks.
Please remember to sign your messages on talk pages by typing four tildes (~~~~); this will automatically insert your username and the date.
If you have any questions, don't hesitate to ask on Project chat. If you want to try out editing, you can use the sandbox to try. Once again, welcome, and I hope you quickly feel comfortable here, and become an active editor for Wikidata.
Best regards! --Tobias1984 (talk) 12:08, 8 September 2013 (UTC)
Massimo Fabrizi
Hi Epìdosis, there are two persons with this name: Massimo Fabrizi (Q110979935) (b. 1969), editor, and Massimo Fabrizi (Q110980016), Romanesco poet and musician. The latter published Aldo Fabrizi, mio padre. He is older and better known (since he had a famous father), but I can't find any authority file for him. Can you check? I would like to create a GND record for him. --Kolja21 (talk) 20:18, 11 August 2023 (UTC)
- @Kolja21: Checked; as you can see in the preview, only SBN can solve it (as usual: authors of a given country should be handled first by their national library/network of libraries) ... I will do it; usually changes become visible every Monday or Tuesday - but in August that's far from sure, so I would say it will surely be visible in September. The update status is shown on the homepage https://opac.sbn.it/, under "ultimo aggiornamento" ("last update"), and is currently 24/07/2023. --Epìdosis 21:34, 11 August 2023 (UTC)
Mythoskop
Ciao Epìdosis, I have a question: Can you make it so the MnM catalogue for Mythoskop (no. 6010) shows up in this category: https://mix-n-match.toolforge.org/#/group/ig_wikidata_property_related_to_greek_mythology ?
Cheers, Jonathan Groß (talk) 18:39, 26 August 2023 (UTC)
- @Jonathan Groß: I thought that the group function worked on the basis of the instances of the properties; so, considering Mythoskop ID (P11946) instance of (P31) Wikidata property related to Greek mythology (Q64295974), it should be there, and it's unclear to me why it isn't :(
- But: considering that the main utility of the group function is the "top missing" view, you can just add Mythoskop manually to the list, giving https://mix-n-match.toolforge.org/#/top_missing/371,108,6010,6028. Good night, --Epìdosis 22:24, 26 August 2023 (UTC)
Yes, I saw this yesterday for the first time: When I created the catalogue for the MANTO database, it automatically applied the category as well as the catalogue description from the property I entered. This had not worked earlier when I created the Mythoskop catalogue. Perhaps the server was too swamped at the time. Anyway, many thanks for your clarification. I guess Magnus is the only one who can fix it. And he is already looking into the Mythoskop catalogue.
Have a splendid Sunday! Jonathan Groß (talk) 06:32, 27 August 2023 (UTC)
Ciao Epìdosis,
the Mythoskop catalogue finally shows all 2080 rows of data I fed it, but now it doesn't display links to the website for some items. I asked Magnus about it but got no answer. I have a vague hope that it might work with a new catalogue. Could you kindly delete the old one, no. 6010? Jonathan Groß (talk) 18:47, 23 September 2023 (UTC)
- @Jonathan Groß: before deleting I tried updating the formatter URL, could you check if it solved the issue? Otherwise, of course I can delete https://mix-n-match.toolforge.org/#/catalog/6010. Good evening, --Epìdosis 18:51, 23 September 2023 (UTC)
Apparently it has. Thank you so much! :D You're the best! Jonathan Groß (talk) 18:53, 23 September 2023 (UTC)
Daphnis
Ciao Epìdosis,
while matching MANTO to Wikidata I've come across two items Daphnis (Q13406785) and Daphnis (Q741627). They had previously been merged by User:Iwbrowse in 2015 but in 2021, you separated them again with a mutual different from (P1889) statement.
Frankly, I don't understand why. Daphnis (Q13406785) has the same statements as Daphnis (Q741627) (or at least some of them), but the latter is more detailed and has wikilinks and external IDs connected to it (which the former item does not). Couldn't we just merge the two items again?
Best wishes from me, birthday boy, Jonathan Groß (talk) 09:42, 30 August 2023 (UTC)
- Hi @Jonathan Groß:! I probably did it on the basis of some Wikipedias (it.wp, en.wp, es.wp) which didn't identify the Daphnis of Longus with the mythical character (in fact without clear sources); Brill's New Pauly identifies them, but in the old Pauly-Wissowa I read the phrase "Vielleicht ist Longus auch durch die Ὀαριστύς angeregt, deren D. mit dem Helden der Sage allerdings kaum mehr als den Namen gemein hat" ("Perhaps Longus was also inspired by the Ὀαριστύς, whose D. has, however, hardly more than the name in common with the hero of the legend"). I have now transformed different from (P1889), which was surely excessive, into said to be the same as (P460), but I remain doubtful about merging them completely. What do you think? --Epìdosis 10:00, 31 August 2023 (UTC)
I think this needs looking into. The Daphnis alluded to in Knaack's RE article is a character in an eidyllion by Theocritus, and Knaack denies identification with the Daphnis in Longus' Novel. I will do some research and get back to you. Cheers, Jonathan Groß (talk) 11:24, 31 August 2023 (UTC)
Hi. I've added more specific descriptions to the items involved and created a set index item as well. I think you were right in distinguishing the two items. According to Knaack (pace Reitzenstein), Daphnis (Q741627) was the inspiration for Longus's character Daphnis. However, I do not consider Daphnis (Q13406785) and Chloe (Q19861876) to be part of Greek mythology, as they are clearly inventions by Longus (though quite possibly inspired by mythology). I have therefore removed the statement "instance of: mythological Greek character" from both items to avoid further confusion. I hope this is acceptable. Cheers, Jonathan Groß (talk) 07:38, 5 September 2023 (UTC)
- @Jonathan Groß: it seems perfect to me! Thanks, --Epìdosis 17:17, 5 September 2023 (UTC)
Jan {Hendrik, Hermanus} van der Merwe
Hello Epidosis,
I'd like to ask about an old merge of Q24039857 and Q102310304. Are you sure about it? The Wikipedia page of Johannes Hendrik van der Merwe (a materials scientist) does not mention a PhD in Leiden in 1952. Moreover, there's a Geni profile (Geni - Johannes Hermanus van der Merwe (1923-1977)) that belongs to a theoretical physicist who attended the University of Leiden. The birth and death dates differ between these two scientists.
Thank you, PKalnai (talk) 19:09, 1 September 2023 (UTC)
- @PKalnai: thanks for noticing; the Geni.com profile is decisive, and the other sources are convincing as well. Merge undone and restored Johannes Hermanus van der Merwe (Q102310304) different from (P1889) Jan H. van der Merwe (Q24039857). --Epìdosis 19:39, 1 September 2023 (UTC)
August Wilhelm von Hof(f)mann
Hello Epidosis,
I'd like to discuss the merge of Q76360 and Q102077512. The dissertation of August Wilhelm Hoffmann, with the family name spelled with a double 'f', is available online: ia800202.us.archive.org/29/items/b22477032/b22477032.pdf On page 29, there's a Vita section stating that Hoffmann was born on 26 May 1818 in Erfurt (Erfordia in Latin), that his father was Ludwig Johann (Ludovicus Joannes in Latin) and that he matriculated in 1839 (MDCCCXXXIX). However, the Wikipedia page of August Wilhelm von Hofmann states that he was born on 8 April 1818 in Giessen, that his father was Johann Philipp, and that he matriculated in 1836. The subject areas of the dissertations also differ: medicine versus chemistry. My conclusion is that these are two different persons.
Unfortunately, there's an inconsistency in the MGP profile of Hoffmann (ID 231029), as Max Le Blanc was not a student of his, but of Hofmann's (acshist.scs.illinois.edu/bulletin_open_access/num21/num21 p44-50.pdf). This inconsistency was imported into Wikidata. Perhaps an ideal solution would be to split the Wikidata pages again and change the advisor of Max Le Blanc. What do you think?
Thank you, PKalnai (talk) 18:06, 11 September 2023 (UTC)
- @PKalnai: I perfectly agree with your analysis, thanks for noticing the problem! Unmerged August Wilhelm von Hofmann (Q76360) vs August Wilhelm Hoffmann (Q102077512). Good evening, --Epìdosis 19:37, 11 September 2023 (UTC)
GND from VIAF
Hi Epìdosis, thanks for working on Property talk:P227/human/wanted/import/from VIAF. Do you see a way to process a part of the "~89.000 items linked to a VIAF cluster" automatically? I'm thinking about: Year of birth and death identical, except athletes. @MisterSynergy: FYI. --Kolja21 (talk) 10:54, 14 September 2023 (UTC)
- Hi @Kolja21, MisterSynergy:, I think the previous import of GNDs from VIAF (Wikidata:Requests for permissions/Bot/MsynBot 12) was a significant success, both importing a substantial number of IDs and being prudent enough to avoid importing not-so-certain ones. The main filter used in that import (besides the fundamental ones: items about humans only, GND to be imported is nowhere found in all of Wikidata yet, item does not have any GND claim yet) was "year of birth identical in GND database and Wikidata", and it was done just about six months ago, so I fear that very few GNDs could be extracted if we keep this same limitation (anyway, the ones that can be extracted by reiterating the same criteria of that import should be extracted; IMHO we could set up a periodic import, e.g. once or twice a year). My guess (though precise data would be good for a clearer picture of the situation) is that the great majority of these 89k GNDs have no dates. If this is true, the only significant criterion I can formulate, as of now, is the following: "same label (keeping in mind that NAME SURNAME in WD is SURNAME, NAME in GND) + at least one coincident occupation (i.e. one of the values of occupation (P106) in WD has a GND ID (P227) which is also present in the GND record)". Of course, cases in which the to-be-imported GND is already in another item should go into User:MisterSynergy/gnd-import for manual check, which is a very good way to discover past mistakes of various types. Anyway, this discussion is very interesting and worthwhile for establishing a general modus operandi which could possibly be applied in the future to other VIAF members as well, so it's a great pleasure for me to discuss possible criteria with you! Good evening, --Epìdosis 17:30, 14 September 2023 (UTC)
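The matching criterion proposed above can be sketched in a few lines. This is only an illustration of the logic, not anything run by the actual bot; the function names, and the GND IDs in the docstrings, are made up for the example.

```python
# Sketch of the proposed filter: accept a GND candidate for a Wikidata item
# only if (a) the names coincide, allowing for GND's "SURNAME, NAME" order
# versus Wikidata's "NAME SURNAME", and (b) at least one occupation GND ID
# attached to the item's occupation (P106) values also appears in the GND record.

def normalize_gnd_name(gnd_name: str) -> str:
    """Turn 'Merwe, Jan van der' into 'Jan van der Merwe'."""
    if "," in gnd_name:
        surname, _, given = gnd_name.partition(",")
        return f"{given.strip()} {surname.strip()}"
    return gnd_name.strip()

def is_safe_match(wd_label, wd_occupation_gnds, gnd_name, gnd_occupation_ids):
    """wd_occupation_gnds: GND IDs (P227) of the item's occupation values;
    gnd_occupation_ids: occupation GND IDs present in the GND record."""
    if wd_label.casefold() != normalize_gnd_name(gnd_name).casefold():
        return False
    # require at least one occupation in common
    return bool(set(wd_occupation_gnds) & set(gnd_occupation_ids))
```

Cases that pass the name check but not the occupation check would be left for manual review, in the spirit of the User:MisterSynergy/gnd-import page mentioned above.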
Diktyon numbers on Wikidata
Ciao Epìdosis,
today I was puzzled to find that apparently there is no WD property for Diktyon numbers yet. I have prepared a proposal. However, there is one thing I hope to clear up before I put it out there: To my knowledge, the IDs are universally called "Diktyon numbers" by the scholarly community, even though they are only used to refer to entries in the Pinakes database. Therefore, the name of the property should be "Diktyon" but the description should be "numeric ID for a Greek manuscript in the Pinakes database". From my point of view this is the correct way, but it might confuse others.
What do you think of this, as you are totus in illis when it comes to Pinakes?
Best, Jonathan Groß (talk) 12:05, 14 September 2023 (UTC)
- @Jonathan Groß: thanks for the proposal, very much needed! In fact, an import of manuscripts could also be relatively easy, since some of the information in Pinakes has also been copied into a Wikibase, Biblissima (compare e.g. https://data.biblissima.fr/w/Item:Q220865 = https://portail.biblissima.fr/ark:/43093/mdataa17a41b9a58260eeaae26c37c4a9031beb749a78 = https://pinakes.irht.cnrs.fr/notices/cote/14932). Regarding the name Diktyon, I think it is related to http://www.diktyon.org/en/: "Diktyon is a scientific network of digital resources and databases on Greek manuscripts"; "The first step to create this network is based on the creation of unique identifiers for manuscripts shelf marks that will be integrated in the different databases from the beginning of 2014. The creation of identifiers for other common items (authors, texts, people) will then follow." So Diktyon is an effort, started about a decade ago, to use unique identifiers firstly for Greek manuscripts, basically (I guess) extending the use of Pinakes IDs (which are called "Diktyon" par excellence), since Pinakes was, and is, by far the most comprehensive (I would say basically all-comprehensive) database of Greek manuscripts. I don't, in fact, have more information about the Diktyon project than what is present on the website, but this should at least partially answer your question :)
- BTW, speaking about Greek literature (and manuscripts), we lack TLG work IDs (which have two disadvantages, the paywall and the difficulty of linking to TLG in general, but are nonetheless an important standard) and Pinakes work IDs (which are sometimes more comprehensive than TLG ones, mainly for Byzantine and post-Byzantine authors whose works aren't yet in TLG, and can be considered a relevant standard too); I haven't proposed them yet because I prefer proposing properties when I am confident about having time to add at least a few dozen values, but I'm sure it will be good to have them sooner or later. Bye, --Epìdosis 13:03, 14 September 2023 (UTC)
Thanks for the comment! I have now opened the proposal to discussion.
I share your scepticism about importing TLG work IDs at this point ... there is no way for us to meaningfully employ them here or check their statements in any way. Also, I'm having enough trouble getting only a few hundred items from the Clavis Historicorum Antiquitatis Posterioris into Wikidata (because I don't just create mini-items, I add qualified and referenced statements to them, so creating more than four at a time gives me a headache). And I am also unsure to what degree we can rely on tools such as Mix'n'match to help us with our data. I have had nothing but trouble with my last two catalogues (MANTO and Mythoskop), both of which are missing several rows of data, and Magnus cannot seem to find out how to fix it.
I would love to have a reliable way of integrating external datasets with Wikidata, but given the state of MnM and my recent experiences I don't feel confident.
Sorry for unloading all of this on you. But I am sure you can relate. Best, Jonathan Groß (talk) 13:35, 14 September 2023 (UTC)
Mix'n'match
Hi! How's everything? About a year later, I'm contacting you again about a similar issue ... Thanks to a careless, if not vandalistic, user (just blocked, but now who checks/reverts his numerous edits?), errors seem to have crept into various entries (concerning footballers in particular). I have just acted on this one, but if I try to remove the erroneous automatic Mix'n'match association with the Transfermarkt entry that refers to a near-namesake, in order to prevent the (duplicate) identifier from being added again, I get an error message (missing permissions?). When you have time, could you shed some light on this? Thanks a lot! Sanremofilo (talk) 09:41, 16 September 2023 (UTC)
- @Sanremofilo: understood; you just need to log in to Mix'n'match and then reload the page, at which point it shouldn't give you any problems; once you have logged in, the session should persist for several weeks/months. If the problem persists, let me know! --Epìdosis 09:44, 16 September 2023 (UTC)
Diktyon
Ciao Epìdosis,
Now that we have Diktyon ID (P12042) (nice number :) we could also create a MnM catalogue. You said that the import of data would be fairly easy as the Diktyon items are part of a Wikibase. I would suggest a very lightweight approach. What kind of data could we even extract without issue? Best, Jonathan Groß (talk) 16:36, 24 September 2023 (UTC)
- @Jonathan Groß: it's a good question (the Wikibase is https://data.biblissima.fr/w/Accueil), but as of now I don't have an answer, only a few doubts (maybe they are useful as well): 1) the general doubt is that I have never extracted data from a Wikibase ... this one seems to have an API but no query service, so as of now I'm unsure about the best way to extract data from it; 2) whilst I'm sure that this Wikibase contains a fair amount of data about Greek manuscripts extracted from Pinakes (BTW, it also contains non-Greek manuscripts, e.g. https://data.biblissima.fr/w/Item:Q76680), I don't know exactly whether all Greek manuscripts have been imported, or when (checking some cases, it seems that a lot of Greek manuscripts were imported in 2020, so manuscripts added to Pinakes after 2020 are probably not in this Wikibase); 3) I have a doubt about the usefulness of Mix'n'match in this case: I usually recommend MnM when most entries of a catalogue are already present in Wikidata and/or when many of the entries absent from Wikidata are present in other Mix'n'match catalogues, so that MnM makes it easier to create new items starting from many catalogues instead of only one - given these premises, in this case nearly all the manuscripts are absent from Wikidata and I don't know of other catalogues containing significant numbers of Greek manuscripts.
- So, to conclude, I would propose a different approach, which I try to detail below (I take as example manuscript from Pinakes https://pinakes.irht.cnrs.fr/notices/cote/65287/). Excuse me for the length :(
- 1) firstly I would like to clarify one point of the data model of manuscripts (i.e. Wikidata:WikiProject Books#Manuscript properties), namely how to describe their location and shelf mark: it can be divided into three parts, i.e. city, institution and archival fonds within the institution. As far as I see from the present data model, the city falls under location (P276) (reasonable), but then the problems start, because the data model has both collection (P195) qualified with inventory number (P217) and the inverse, inventory number (P217) qualified with collection (P195) (see e.g. Shahnamah of Ibrahim Sultan (Q53676578)) - this is the first problem: only one option should be chosen and the other discarded, since duplication of data inside the same item is surely a problem. Unfortunately collection (P195) has as its value not the most precise one, i.e. the archival fonds, but the institution - so, second problem, how do we link the single manuscript to its archival fonds? I think the first practical thing we need to do is discuss these two points, establish a precise data model and also arrange one or two Wikidata:Showcase items of manuscripts for future reference
- 2) we also need to check that we have items for all the cities, institutions and archival fonds; the cities are surely all present, and I'm also confident we have most institutions, but we will need to create a few hundred archival fonds; BTW, Pinakes has IDs for countries, cities, institutions and archival fonds, and having these four properties (or at least the last two) would probably facilitate our check ... if you agree, I think we can propose them as a 4-batch (similarly to e.g. Wikidata:Property proposal/SDBM IDs)
- 3) when points 1 and 2 are OK, we can safely proceed with the real import: I think that, given the situation, the best solution will be mass-importing all the ~80k manuscripts as new items modeled more or less like this: label in English and French copied from Pinakes (but I would omit the country, I think it's a bit redundant), e.g. "Città del Vaticano, Biblioteca Apostolica Vaticana (BAV), Ott. gr. 046"; instance of (P31) manuscript (Q87167); location (P276) city; 1 or 2 statements for the institution, the archival fonds and the inventory number; Diktyon ID (P12042) number. I estimate a maximum of a few hundred duplicates (i.e. already existing manuscripts), which will become more evident once we have all the manuscripts imported. Of course this point 3 presupposes we have a complete list of the manuscripts in Pinakes, but, since I think points 1 and 2 will need at least a few weeks to be solved, I'm confident I can obtain this list in the meanwhile (the way I'm thinking of is a scrape via numerus currens, i.e. checking all possible Diktyon numbers from https://pinakes.irht.cnrs.fr/notices/cote/1/ to https://pinakes.irht.cnrs.fr/notices/cote/79842/, the highest Diktyon presently existing; it would take just a few days).
- If you agree with the above plan, I think that: A) we should start a reflection on point 1 as soon as possible, because these discussions about data models are often very long (argh!); B) we can also start checking institutions and archival fonds per point 2 (if you want, in a few days I can obtain a complete list of them; if we propose properties for them, I can create their Mix'n'match catalogues immediately after the properties are created) - I would very much like to help you with this, especially for Italian institutions and archival fonds; C) I can get a complete list of the manuscripts within a few weeks, so as to be ready for the mass import when points 1 and 2 are completed. Let me know! --Epìdosis 17:32, 24 September 2023 (UTC)
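The numerus-currens scrape mentioned in point 3 could look roughly like this. This is only an outline under the assumption that each Pinakes notice lives at the URL pattern cited above and that missing numbers return an HTTP error; rate limiting and error handling are deliberately simplistic, and nothing here is an official Pinakes API.

```python
# Sketch: probe every candidate Diktyon number against the Pinakes notice URL
# and keep the ones that resolve. Assumes notices live at .../notices/cote/N/.
import time
import urllib.error
import urllib.request

BASE = "https://pinakes.irht.cnrs.fr/notices/cote/{}/"

def notice_url(diktyon: int) -> str:
    """Build the (assumed) notice URL for a given Diktyon number."""
    return BASE.format(diktyon)

def scrape_existing(start=1, stop=79842, pause=1.0):
    """Yield Diktyon numbers whose notice page answers with HTTP 200."""
    for n in range(start, stop + 1):
        req = urllib.request.Request(notice_url(n), method="HEAD")
        try:
            with urllib.request.urlopen(req) as resp:
                if resp.status == 200:
                    yield n
        except urllib.error.URLError:
            pass  # 404 or network error: treat as "no notice with this number"
        time.sleep(pause)  # be polite to the server

if __name__ == "__main__":
    # Probe only a tiny range when run directly, to avoid hammering the site.
    for n in scrape_existing(start=1, stop=10):
        print(n)
```

With a one-second pause, 79842 probes take under a day of wall-clock time, which matches the "just a few days" estimate above even with retries.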
I'm thrilled by this prompt, qualified, informed and comprehensive answer, much more than I could have hoped for with my lazy ping. I agree that we need to clear up our data model first, and I agree with your overall approach. We can get started right away; I just don't know how much time I can put in over the next few days, as things are quite hectic at home. All the best, Jonathan Groß (talk) 19:51, 24 September 2023 (UTC)
I will give the data model a bit more thought. Meanwhile, you got mail! Jonathan Groß (talk) 11:44, 28 September 2023 (UTC)
So I browsed a bit and found Codex Coislinianus (Q1105939) – an interesting case, as the remnants of this manuscript are scattered across 8 libraries in 6 countries. How would we deal with those? Create items for the disiecta membra with Coislin 202 part of (P361) Codex Coislinianus (Q1105939) and the inverse Codex Coislinianus (Q1105939) has part(s) (P527) Coislin 202? I think that would be a good course of action. Jonathan Groß (talk) 12:20, 29 September 2023 (UTC)
And there are, of course, cases where it is the other way around, two separate codices bound together. Like with Lectionary 61 (Q6512323) and Minuscule 729 (Q6870880), both part of Diktyon 49751. I would suggest creating dedicated items for these cases as well. Jonathan Groß (talk) 12:44, 29 September 2023 (UTC)
In the case of Lectionary 61 (Q6512323) and Minuscule 729 (Q6870880), I did what I suggested earlier today and created a new item for the manuscript as it is bound today. Now on towards clearing up our data model on manuscripts. As Wikidata does not have a way to string hierarchical properties together, we would have to give each manuscript separate statements for its location (country, city, institution), fonds, and shelf number. Keeping our current data model intact should be a priority, along with compliance with scholarly standards. Taking up what you said in your initial, exhaustive response, all manuscript items should have statements for:
- country = country (P17)
- city: location (P276) should work, although this property has a broader scope and is often used with finer-grained values (city districts, specific buildings, street addresses etc.). But for our purposes, that should not pose a problem.
- institution: collection (P195) is a good candidate, but I have some gripes with it. 1) As of now it is mostly used with a holding institution (bestandshaltende Institution) like Bibliothèque nationale de France (Q193563) as its value, but its name (en: collection, de: Sammlung) suggests something more along the lines of "fonds" or "(private) collection". 2) The data duplication you mentioned, because it is used as a qualifier of inventory number (P217) – and vice versa. And both properties have constraints prompting users to duplicate the data as well.
In my opinion, the best course of action would be to resolve 2) by removing the constraint from both properties and keeping to the practice of using collection (P195) for holding institutions, but also specifying that this property is meant to do just that, and changing its name and aliases accordingly. Of course this can only be done after discussing the matter with the community.
- fonds: Minding the ambiguity of collection (P195) as it is today, we should propose and create a new property.
- shelf mark: These of course are part of inventory number (P217), or identical to it, depending on how you look at it. Pinakes only has the running number, but in scholarship the location, institution and fonds are often used before it. I think the best course of action would be to use inventory number (P217) with at least the fonds and shelf mark as a string, maybe even city, institution, fonds and shelf mark as in Pinakes. And of course, the data duplication issue with collection (P195) would have to be addressed.
What do you think? Should we move this discussion to the WikiProject Books? Jonathan Groß (talk) 18:47, 29 September 2023 (UTC)
- @Jonathan Groß: thanks very much for these reflections. Of course I agree that, for manuscripts composed of two previous ones and for ancient manuscripts divided in two, the best possible solution is having three items interlinked through part of (P361)/has part(s) (P527). About the fundamental statements, I agree on country (P17) for countries and location (P276) for cities; regarding collection (P195) I agree exactly with the solution you propose, changing its labels/descriptions/aliases and constraints to keep it only for the institution and to remove the redundancy with inventory number (P217); of course it needs community discussion (and a significant cleaning effort after the hoped-for approval). I also agree on the need for a new property for archival fonds (I think we can propose it a few days after we have the Pinakes property for archival fonds and have started creating a few of them); finally, it's good to use inventory number (P217) for the shelf mark, in which I would probably include city, institution, fonds and shelf mark, and I would not qualify it with collection (P195), in order to avoid redundancy. Finally, I think we should submit our proposal about reforming the usage of inventory number (P217) and collection (P195) in the aforementioned ways to WikiProject Books, starting the discussion soon and hoping it evolves in reasonable time :) Good night, --Epìdosis 22:51, 29 September 2023 (UTC) BTW, here in Lisbon at Wikidata Days, both yesterday and today (and, I think, also tomorrow), we have spoken a lot about the data modeling of manuscripts!
Thanks for your answer. It seems we are in agreement. This gives me hope that we might be able to move things along in due time. Meanwhile, have fun at Lisbon and give my best to everyone there! Jonathan Groß (talk) 22:54, 29 September 2023 (UTC)
So I started adding Diktyon numbers to existing Wikidata items for manuscripts. My thought was that we might avoid creating hundreds of duplicates later during the data import. I thought there would be a few hundred items at most, but it seems there are a lot more. I've managed to assign numbers for most of the manuscripts from the BNF collections, but it took an entire evening. And there are some lists that show how much more work this would entail: NT uncials, NT lectionaries, NT minuscules 1–1000, 1001–2000 and 2001–. And there are some more lists and items in en:Category:Greek-language manuscripts.
The problem with my approach is that it is inefficient. Even when focussing on a single institution like the BNF, I have several tabs open with lists of the various fonds and need to switch between them (the header of each and every page is "Pinakes", which is not helping). I need to ascertain that the shelf number on Wikipedia is correct (it wasn't in two or three cases). And even when things go smoothly (alternating only between Grec and Supplement Grec, assignment is correct), it takes at least 30 seconds to find, copy and paste the Diktyon number into the Wikidata item. As an added problem (which I already described above), many items describe codices or codicological units which belong to several Pinakes entries, or to only part of a single Pinakes entry. This is especially true for uncials (majuscule manuscripts), which are scattered across the planet or sometimes encompass only a single leaf of a manuscript.
The reason it takes so long to find a Diktyon number for a specific manuscript is that the Pinakes search function is very cumbersome. To look up a manuscript you must enter (in French or English) its country, city, institution, fonds AND shelf number, all of which must be chosen from hierarchical drop-down menus. A Google search doesn't really help, as it mostly produces wrong results; it does help, however, in getting to the fonds in question more quickly. To me this demonstrates the usefulness and importance of our endeavour to integrate the Pinakes data into Wikidata.
I am unsure if I should keep adding Diktyon numbers in this way. Maybe I will work only on the straightforward cases where the Wikidata item is identical to the Pinakes item. This should avoid a few hundred duplicate items. But perhaps this isn't even advisable? I'd love your feedback.
So long, and have a great weekend, Jonathan Groß (talk) 06:53, 30 September 2023 (UTC)
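One way to speed up the matching loop described above would be to normalize shelf marks on both sides to a comparable key before looking anything up, so that "Suppl. gr. 0046" on one side and "Supplement grec 46" on the other land on the same string. The sketch below is only an illustration: the alias table covers two BnF fonds as examples and would need to be extended, and none of this reflects how Pinakes itself stores the data.

```python
# Sketch: reduce a shelf mark to a "fonds + number" key, folding case,
# leading zeros in the number, and a few (illustrative) fonds abbreviations.
import re

FONDS_ALIASES = {
    "suppl. gr.": "supplement grec",
    "supplement grec": "supplement grec",
    "gr.": "grec",
    "grec": "grec",
}

def shelfmark_key(raw: str) -> str:
    """'Suppl. gr. 0046' and 'Supplement grec 46' map to the same key."""
    s = raw.strip().casefold()
    m = re.match(r"(.*?)\s*(\d+)\s*$", s)
    if not m:
        return s  # no trailing number: fall back to the folded string
    fonds, number = m.group(1).strip(), int(m.group(2))
    fonds = FONDS_ALIASES.get(fonds, fonds)
    return f"{fonds} {number}"
```

Building a dictionary from key to Diktyon number once per fonds would then replace the per-item tab-switching with a single lookup.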
- @Jonathan Groß: wow, I also thought there were very few Greek manuscripts already in Wikidata ... what I would suggest is indeed to start with matching the manuscripts whose name on Wikidata coincides more or less with the shelf mark in Pinakes; for the others we can think more thoroughly about possible strategies. And of course we need to start the discussion in WikiProject Books, hoping for good agreement on the issues. I will start working thoroughly on this probably in the middle of next week, after having settled a batch of things which accumulated during my good days in Lisbon :) --Epìdosis 09:55, 30 September 2023 (UTC)
Meyer, Johann, 17. sec. <Autore>
Hello Epìdosis, is there an authority record for Jean Meyer (Q122829014), author of Le maitre de langue muet (Trento Biblioteca Diocesana Vigilianum), or can you create one? The book was printed in Nürnberg, which is why he is listed as "Johann". I didn't find him in the Bibliothèque nationale de France. --Kolja21 (talk) 12:29, 26 September 2023 (UTC)
- @Kolja21: unfortunately not, since the book is not in SBN: only two libraries of Alto Adige (province of Bolzano) and no libraries of Trentino (province of Trento) are presently in SBN. I hope they will join SBN sooner or later ... --Epìdosis 12:43, 26 September 2023 (UTC)
Request to clean up an item
Hi Epìdosis, sorry to bother you, but I wanted to ask whether, when you have some free time, you could hide the edit summary of these two edits (first and second) of an item here on Wikidata, as they contain an offensive word; I believe it's enough to simply hide the edit summary rather than all the edits in between. Thank you in advance. Pazio Paz (talk) 22:06, 26 September 2023 (UTC)
Æneid of Virgil
There are multiple editions of the Conington translation. In this edit you put an ID for a 3rd edition from 1870 on a data item for the first edition from 1866. --EncycloPetey (talk) 16:41, 27 September 2023 (UTC)
- @EncycloPetey: thanks for noticing; I saw another user had added the ID in the wrong format, so I re-added it in the correct format, assuming the ID itself was correct - however, it wasn't. --Epìdosis 22:23, 27 September 2023 (UTC)