Wikidata:Bot requests
Bot requests
If you have a bot request, add a new section using the button and describe exactly what you want. To reduce processing time, first discuss the legitimacy of your request with the community in the Project chat or on the relevant WikiProject's talk page, and refer to previous discussions justifying the task in your request. For bot flag requests, see Wikidata:Requests for permissions. Tools available to all users can often accomplish the work without the need for a bot.
On this page, old discussions are archived. An overview of all archives can be found at this page's archive index. The current archive is located at 2023/09.
SpBot archives all sections tagged with {{Section resolved|1=~~~~}} after 15 days.
Cleaning of streaming media service URLs
Request date: 12 December 2020, by: Swicher
I'm not sure if this is the best place to propose this, but while reviewing the URLs returned by a query with this script:
import requests
from concurrent.futures import ThreadPoolExecutor

# Checks the link of an item; if it is down, it is saved in the list "novalid"
def check_url_item(item):
    # Some sites may return an error if a browser useragent is not indicated
    useragent = 'Mozilla/5.0 (X11; Linux i686) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.77'
    item_url = item["url"]["value"]
    print("Checking %s" % item_url, end="\r")
    req = requests.head(item_url, headers = {'User-Agent': useragent}, allow_redirects = True)
    if req.status_code == 404:
        print("The url %s in the element %s returned error" % (item_url, item["item"]["value"]))
        novalid.append(item)

base_query = """SELECT DISTINCT ?item ?url ?value
{
%s
BIND(IF(ISBLANK(?dbvalue), "", ?dbvalue) AS ?value)
BIND(REPLACE(?dbvalue, '(^.*)', ?url_format) AS ?url)
}"""
union_template = """ {{
  ?item p:{0} ?statement .
  OPTIONAL {{ ?statement ps:{0} ?dbvalue }}
  wd:{0} wdt:P1630 ?url_format.
}}"""
properties = [
    "P2942",  # Dailymotion channel
    "P6466",  # Hulu movies
    "P6467",  # Hulu series
]
# Items with links that return errors will be saved here
novalid = []
query = base_query % "\n UNION\n".join([union_template.format(prop) for prop in properties])
req = requests.get('https://query.wikidata.org/sparql', params = {'format': 'json', 'query': query})
data = req.json()
# Schedule and run 25 checks concurrently while iterating over the items
check_pool = ThreadPoolExecutor(max_workers=25)
result = check_pool.map(check_url_item, data["results"]["bindings"])
I have noticed that almost half of them are invalid. I do not know whether in these cases it is better to delete or archive them, but a bot should perform this task periodically, since the catalogs of streaming services tend to change a lot (probably many of these broken links are due to movies/series whose license was not renewed). Unfortunately I could only include Hulu and Dailymotion, since the rest of the services have the following problems:
- Always return OK (even with invalid links): Xfinity Stream ID (P8823), Hoopla title ID (P5680), YouTube video ID (P1651), YouTube channel ID (P2397), YouTube playlist ID (P4300)
- Return error 403: Netflix ID (P1874), HBO Max ID (P8298) (this seems to work only in countries where the service is available).
For those sites it is necessary to perform a more specialized check than a HEAD request (such as using youtube-dl (Q28401317) for YouTube).
In the case of Hulu I have also noticed that some items can have two valid values in Hulu movie ID (P6466) and Hulu series ID (P6467) (see for example The Tower of Druaga (Q32256)) so you should take that into account when cleaning links.
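For the services above, a per-site check would be needed instead of a plain HEAD request. As a rough illustration of the YouTube case, a minimal sketch using yt-dlp (the maintained youtube-dl fork); the helper name and options are assumptions, and no video is downloaded:

from yt_dlp import YoutubeDL
from yt_dlp.utils import DownloadError

def youtube_video_exists(video_id):
    # Ask yt-dlp for metadata only; unavailable/private/deleted videos raise DownloadError
    opts = {"quiet": True, "skip_download": True}
    try:
        with YoutubeDL(opts) as ydl:
            ydl.extract_info("https://www.youtube.com/watch?v=%s" % video_id, download=False)
        return True
    except DownloadError:
        return False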
- Request process
request to add identifiers from FB (2021-02-11)
Thanks to a recent import, we currently have more than 1.2 million items where the only identifier is Freebase ID (P646). However, checking https://freebase.toolforge.org/, some of them have further identifiers available there.
Samples:
- Pass the Plate (Q1537223) → https://freebase.toolforge.org/m/03yff7k → /source/videosurf/211021 /source/clicker/tv/pass-the-plate
- Strongest Chil Woo (Q484295) → https://freebase.toolforge.org/m/0h1d8vd → /authority/thetvdb/series/83143 /user/ovguide/tvdb_show_id/83143
- Dear Mother...Love Albert (Q5246901) → https://freebase.toolforge.org/m/07cfvgz → /authority/thetvdb/series/274836 /authority/tvrage/series_numeric/8341
- Sisters Over Flowers (Q15116349) → https://freebase.toolforge.org/m/011sn4j8 → /user/ovguide/tvdb_show_id/285342
- Naked and Funny (Q50927) → https://freebase.toolforge.org/m/05bzv2w → /authority/thetvdb/series/119391 /user/ovguide/tvdb_show_id/119391
See Wikidata:Project_chat#Freebase_(bis) for discussion.
- Task description
Import IDs where available. Map keys to properties; where a mapping is not yet available, add it at Wikidata:WikiProject_Freebase/Mapping.
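A rough sketch of the import step with pywikibot; get_freebase_keys() is a hypothetical helper standing in for whatever reads the keys shown by the toolforge tool, and the key-to-property table is a tiny illustrative subset of the mapping page:

import pywikibot

repo = pywikibot.Site("wikidata", "wikidata").data_repository()

# Tiny illustrative subset; the real table lives at Wikidata:WikiProject_Freebase/Mapping
KEY_TO_PROPERTY = {
    "/authority/thetvdb/series": "P4835",   # TheTVDB ID (assumed mapping)
}

def import_ids(item_id, freebase_keys):
    """freebase_keys: iterable of (key_prefix, value) pairs such as
    ("/authority/thetvdb/series", "83143"), e.g. from a hypothetical get_freebase_keys()."""
    item = pywikibot.ItemPage(repo, item_id)
    item.get()
    for key, value in freebase_keys:
        prop = KEY_TO_PROPERTY.get(key)
        if prop is None or prop in item.claims:
            continue                        # unmapped key, or the item already has this ID
        claim = pywikibot.Claim(repo, prop)
        claim.setTarget(value)
        item.addClaim(claim, summary="import identifier mapped from Freebase key")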
- Discussion
- Request process
Request to change lexeme forms' grammatical features (2021-07-08)
Request date: 8 July 2021, by: Bennylin
- Link to discussions justifying the request
- Hello, my bot, OrophinBot, is adding several thousand Indonesian lexemes. In one of the batches (500 lexemes) I made a slight mistake: the grammatical feature is listed as active (Q1317831) where it should have been passive (Q1194697). Example: tercapak (L576679) 1456262214 -> [1]
- Task description
How can I change the grammatical features of a form? (I operate the bot, I just need to know the commands.) I have the list of lexemes. I reckon this should not be too hard; I'm just not familiar with the commands needed to make the changes.
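One possible approach, sketched below with the wbleditformelements module of the Wikibase Lexeme API (login and CSRF-token handling are omitted; the exact parameter set should be double-checked against the API documentation):

import json
import requests

API = "https://www.wikidata.org/w/api.php"
session = requests.Session()   # assumed to be already logged in as the bot

def set_form_features(form_id, features, csrf_token):
    """Replace a form's grammatical features,
    e.g. set_form_features("L576679-F1", ["Q1194697"], token) for active -> passive."""
    return session.post(API, data={
        "action": "wbleditformelements",
        "format": "json",
        "formId": form_id,
        "data": json.dumps({"grammaticalFeatures": features}),
        "bot": 1,
        "token": csrf_token,
        "summary": "fix grammatical feature: active -> passive",
    }).json()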
- Licence of data to import (if relevant)
- Discussion
- Request process
request to merge MNAC dups. (2021-11-13)
Request date: 13 November 2021, by: Jura1
- Task description
Back in 2016, there seems to have been some duplication between two bots. Compare:
- https://www.wikidata.org/w/index.php?title=Q27572303&action=history
- https://www.wikidata.org/w/index.php?title=Q27941371&action=history
It showed up for several works at Wikidata:WikiProject_sum_of_all_paintings/Creator/Ramon_Casas_i_Carbó in Museu Nacional d'Art de Catalunya (Q861252) and Museu Nacional d'Art de Catalunya (Q23681318).
The idea is to identify all of them (for other artists as well) and merge them.
- Discussion
- Request process
request to undo merge EC meetings (2021-12-02)
Request date: 2 December 2021, by: Jura1
- Link to discussions justifying the request
- Task description
- Licence of data to import (if relevant)
- Discussion
- Request process
request to delete statements and sitelinks and merge items: dewiki duplicates (2021-12-16)
Request date: 16 December 2021, by: Jura1
- Task description
- The items mentioned at Wikidata:Project_chat/Archive/2021/12#items_for_redirects_with_subject_has_role_(P2868)_=_Wikimedia_redirect_page_(Q21528878) are mostly items with a sitelink to dewiki, a series of statements, and an indication via permanent duplicated item (P2959) that each is a duplicate of another item (sample: [2])
- delete sitelink [3]
- delete statements [4][5]
- merge the two items [6][7]
- Discussion
- Request process
Request to merge similar statements on items created by QuickStatements batch #104278 (2022-11-05)
Request date: 5 November 2022, by: RPI2026F1
- Link to discussions justifying the request
- Task description
I tried to make one claim with two references using QuickStatements, but it created two identical claims with the references split between them in an undefined order. I need these two claims to be merged (they have the same value) and both references to be kept.
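A rough sketch of such a merge with pywikibot, moving the reference blocks of each duplicate onto the first claim with the same value and then removing the duplicate (the item and property are assumed to come from the batch in question; the sketch assumes simple hashable values such as items or strings):

import pywikibot

repo = pywikibot.Site("wikidata", "wikidata").data_repository()

def merge_duplicate_claims(item_id, prop):
    """Merge identical-value claims of `prop`, keeping all references."""
    item = pywikibot.ItemPage(repo, item_id)
    item.get()
    seen = {}
    for claim in list(item.claims.get(prop, [])):
        value = claim.getTarget()
        if value not in seen:
            seen[value] = claim
            continue
        survivor = seen[value]
        # Copy every reference block of the duplicate onto the surviving claim
        for source in claim.sources:
            ref_claims = [c for claim_list in source.values() for c in claim_list]
            survivor.addSources(ref_claims, summary="merge references of duplicate claim")
        item.removeClaims([claim], summary="remove duplicate claim")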
- Licence of data to import (if relevant)
- Discussion
- Request process
Request to link Icelandic categories, reusing article sitelink info (2022-11-05)
Request date: 6 November 2022, by: Snævar
- Link to discussions justifying the request
- Do I need one?
- Task description
Link Icelandic categories with English categories according to quarry:query/25561. In the query the Icelandic category is on the left and the English one on the right. The query takes Icelandic articles whose sitelinks point to English articles, finds an Icelandic category with the same name as the article, and reuses the article's sitelink information for the category. Basically, it reuses information from the article and transfers it to a category with the same name.
Skip on any conflict. If the English category is a redirect, follow the redirect. (The Icelandic categories are not redirects; that has been excluded in the query.) The English categories have to be checked, as they may or may not exist. This may also involve merging the item with the Icelandic sitelink into the item with the English category; I have not checked that either. If the Icelandic sitelink is in an item, it is the only sitelink there; that has been checked.
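A rough sketch of the per-pair processing with pywikibot (the category pairs are assumed to come from the quarry result; the conflict and merge handling is simplified):

import pywikibot
from pywikibot.exceptions import NoPageError

is_wiki = pywikibot.Site("is", "wikipedia")
en_wiki = pywikibot.Site("en", "wikipedia")

def link_category(is_title, en_title):
    is_cat = pywikibot.Category(is_wiki, is_title)
    en_cat = pywikibot.Category(en_wiki, en_title)
    if not en_cat.exists():
        return                                        # English category does not exist: skip
    if en_cat.isCategoryRedirect():
        en_cat = en_cat.getCategoryRedirectTarget()   # follow the redirect
    try:
        item = pywikibot.ItemPage.fromPage(is_cat)    # item holding the Icelandic sitelink
    except NoPageError:
        return
    item.get()
    if "enwiki" in item.sitelinks:
        return                                        # conflict: skip
    # If en_cat already has its own item, a merge would be needed instead (not shown here)
    item.setSitelink(en_cat, summary="add enwiki sitelink reusing the matching article's link")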
- Licence of data to import (if relevant)
Not relevant, considered uncopyrightable
- Discussion
- Request process
Request to periodically add VIAF IDs to humans (2022-11-06)
Request date: 6 November 2022, by: Epìdosis
- Task description
Given an item with all of the following features:
- instance of (P31)human (Q5),
- at least one non-deprecated VIAF-member ID (for a list of VIAF-member properties see https://w.wiki/PRN),
- no VIAF ID (P214) at any rank;
the bot should
- check whether the non-deprecated VIAF-member IDs contained in the item are present in one or more VIAF clusters;
- all the VIAF cluster-IDs obtained in this way should be added by the bot to the item with the following reference: stated in (P248)Virtual International Authority File (Q54919) + VIAF ID (P214)cluster ID + retrieved (P813)date of retrieval + based on heuristic (P887)inferred from VIAF ID containing an ID already present in the item (Q115111315) (example edit).
Note 1: the restriction to personal items, and thus to VIAF personal clusters, is due to the fact that the quality of VIAF non-personal clusters is much lower (i.e. there are many conflations), so it is better to avoid a massive addition of conflated clusters, which could afterwards attract non-pertinent IDs.
Note 2: Property talk:P214/Duplicates/humans (just emptied) can easily be used to monitor the quality of the additions and to evaluate whether their quality is low enough to justify suspending the bot activity.
Note 3: as in the title, I think the bot should do this job not once but on a periodic basis, ideally once a month in my opinion (though once every two, three or six months would be good as well).
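A rough sketch of the lookup step, assuming VIAF's sourceID redirect endpoint (https://viaf.org/viaf/sourceID/{CODE}|{ID}) resolves a member ID to its cluster; the property-to-source-code table below is a tiny illustrative subset with assumed codes, and the local ID may need reformatting to the form VIAF uses internally:

import requests

# Illustrative subset only; codes and ID formats must be verified per property
VIAF_SOURCE = {
    "P8034": "BAV",    # Vatican Library VcBA ID (assumed code)
    "P396": "ICCU",    # SBN author ID (assumed code)
}

def viaf_cluster_for(prop, local_id):
    """Resolve a VIAF-member ID to a VIAF cluster ID, or return None."""
    code = VIAF_SOURCE.get(prop)
    if code is None:
        return None
    url = "https://viaf.org/viaf/sourceID/%s|%s" % (code, local_id)
    r = requests.get(url, allow_redirects=True, timeout=30)
    if r.status_code != 200:
        return None
    # The final URL looks like https://viaf.org/viaf/<cluster id>/
    return r.url.rstrip("/").rsplit("/", 1)[-1]

# The returned cluster ID would then be added as VIAF ID (P214) with the
# reference described in the task (P248 + P214 + P813 + P887).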
- Discussion
- @Epìdosis: I am unable to do this automatically on a periodic basis, but I recently did this for some NL CR AUT ID (P691) items, and if you have some particular property to work on, I may be able to do it at least once. Vojtěch Dostál (talk) 18:42, 5 July 2023 (UTC)
- @Vojtěch Dostál: OK, good. Once for some properties is surely better than nothing! Could you start trying with the following: SBN author ID (P396), Vatican Library VcBA ID (P8034), CONOR.SI ID (P1280)? Afterwards we can see the results and evaluate further ones. Thanks in advance! --Epìdosis 19:37, 5 July 2023 (UTC)
- OK. As for the process - I understand all the steps but what about if the particular VIAF ID is already present in a *different* item? Should it still be added? Vojtěch Dostál (talk) 06:31, 6 July 2023 (UTC)
- @Vojtěch Dostál: I think that, "if the particular VIAF ID is already present in a *different* item", two solutions are possible: the easiest is to add it anyway (I periodically check the report of humans having the same VIAF, so I will spot and fix them); the other one, which I would slightly prefer, is the one adopted in Wikidata:Requests for permissions/Bot/MsynBot 12, which is creating a table for the problematic cases (in that case, this table) - the table could simply have 3 columns: first the VIAF cluster, second the item already containing it, third the item where it should be added by the bot - and then a user (me) progressively empties the table manually, solving all such cases. If the second solution is too complex, of course, I'm fine with the first and I will check the cases anyway through the general report. Let me know! Thanks, --Epìdosis 07:18, 6 July 2023 (UTC)
- @Vojtěch Dostál: I've started seeing your edits in my watchlist; they are perfect. Here is a little example of the usefulness of the operation: on Q113135789 two VIAFs were added, one from SBN and the other from BAV, and the two helped me spot a duplication in ISNI and in SUDOC, while also adding GND and LCNAF. Great job! --Epìdosis 10:55, 6 July 2023 (UTC)
- Nice, I am glad that it helps. The edits are in these groups: [8], [9] and [10]. Also, I prepared the aforementioned report for you here: User:Vojtěch Dostál/viaf already somewhere. If you spot a problem in some of my edits, tell me and I'll fix it. Have a nice day! Vojtěch Dostál (talk) 11:28, 6 July 2023 (UTC)
- @Epìdosis Tagging you for notification, I forgot to do it above. Vojtěch Dostál (talk) 19:55, 6 July 2023 (UTC)
- Request process
Request to downrank unofficial Mastodon addresses (2022-11-19)
Request date: 19 November 2022, by: Shisma
- Link to discussions justifying the request
- Task description
All statements of Mastodon address (P4033) that have a qualifier object has role (P3831) → unofficial (Q29509080) or mirror storage (Q654822) should be set to deprecated rank. This should affect all these statements.
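A minimal sketch of the rank change using the wbgetclaims and wbsetclaim API modules (login and CSRF-token handling omitted; the list of affected items is assumed to come from the query linked above):

import json
import requests

API = "https://www.wikidata.org/w/api.php"
session = requests.Session()            # assumed to be already logged in as a bot
UNOFFICIAL_ROLES = {"Q29509080", "Q654822"}

def deprecate_unofficial_mastodon(item_id, csrf_token):
    claims = session.get(API, params={
        "action": "wbgetclaims", "entity": item_id,
        "property": "P4033", "format": "json",
    }).json().get("claims", {}).get("P4033", [])
    for claim in claims:
        roles = claim.get("qualifiers", {}).get("P3831", [])
        role_ids = {q["datavalue"]["value"]["id"] for q in roles
                    if q.get("snaktype") == "value"}
        if role_ids & UNOFFICIAL_ROLES and claim["rank"] != "deprecated":
            claim["rank"] = "deprecated"
            session.post(API, data={
                "action": "wbsetclaim", "format": "json",
                "claim": json.dumps(claim),
                "bot": 1, "token": csrf_token,
                "summary": "deprecate unofficial/mirror Mastodon address",
            })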
- Discussion
- Request process
Request to make sitelinks of asteroid items intentional (2022-12-08)
Request date: 8 December 2022, by: ChristianKl
- Task description
Turn all sitelink to redirect (Q70893996) badges on sitelinks to enwiki, tlwiki and ptwiki into intentional sitelink to redirect (Q70894304) badges for all items that are instance of (P31) asteroid (Q3863). While most wikis have individual articles for individual asteroids, those wikis use lists that cover multiple asteroids, and there are redirects that point toward those lists.
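A minimal sketch of the badge swap with the wbsetsitelink API module (token handling omitted; the affected item/site/title triples are assumed to come from a query; note that the badges parameter replaces the sitelink's whole badge list):

import requests

API = "https://www.wikidata.org/w/api.php"
session = requests.Session()          # assumed to be already logged in as a bot
INTENTIONAL_REDIRECT = "Q70894304"

def mark_redirect_intentional(item_id, site, title, csrf_token):
    """Replace the badge list of one sitelink with the intentional-redirect badge."""
    session.post(API, data={
        "action": "wbsetsitelink", "format": "json",
        "id": item_id,
        "linksite": site,                 # e.g. "enwiki", "tlwiki", "ptwiki"
        "linktitle": title,               # keep the existing title unchanged
        "badges": INTENTIONAL_REDIRECT,   # pipe-separated badge item IDs
        "bot": 1, "token": csrf_token,
        "summary": "mark sitelink to redirect as intentional",
    })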
- Discussion
- Request process
Request to move sitelinks to redirect on year items to be intentional (2022-12-08)
Request date: 8 December 2022, by: ChristianKl
- Task description
Many wikis have articles for each individual year. Other wikis only have articles that cover multiple decades or centuries together. For all items with instance of (P31) year BC (Q29964144), move all sitelink to redirect (Q70893996) badges on redirects on dewiki, enwiki, itwiki, ltwiki, ocwiki and tlwiki to intentional sitelink to redirect (Q70894304). For all items with instance of (P31) year (Q577), move all sitelink to redirect (Q70893996) badges on redirects on ocwiki and tlwiki to intentional sitelink to redirect (Q70894304).
- Discussion
- Request process
Request to remove constraint violating charge (P1595) claims (2022-12-08)
Request date: 8 December 2022, by: ChristianKl
- Link to discussions justifying the request
- https://www.wikidata.org/wiki/Wikidata:Property_proposal/Archive/27#P1595
- https://www.wikidata.org/wiki/Wikidata:Living_people
- Task description
Given the principle of the presumption of innocence, it's problematic to feature charges centrally on items of living people. As a result, the initial design of charge (P1595) was to have it on the item for the trial in question and not on the item for the person. There was never any consensus to extend the domain, yet people used it in a constraint-violating way. Given that we have a clear policy, I don't think we need a consensus-finding discussion to decide to remove statements that are problematic for privacy.
Therefore, I suggest running a bot that removes all charge (P1595) statements from human (Q5) items.
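A minimal sketch of the removal with pywikibot, selecting the affected items via SPARQL:

import pywikibot
from pywikibot import pagegenerators

repo = pywikibot.Site("wikidata", "wikidata").data_repository()
QUERY = "SELECT ?item WHERE { ?item wdt:P31 wd:Q5 ; p:P1595 ?statement . }"

def remove_charge_claims():
    for item in pagegenerators.WikidataSPARQLPageGenerator(QUERY, site=repo):
        item.get()
        claims = item.claims.get("P1595", [])
        if claims:
            item.removeClaims(claims, summary="remove charge (P1595) from item about a person")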
- Licence of data to import (if relevant)
- Discussion
- False accusations that must be removed are rare on Wikidata. Instead, the problem is that Wikipedia and Wikidata don't have items for the trials yet. If anybody wants to remove these statements, please make sure they are migrated to new items for these trials. Midleading (talk) 08:43, 17 September 2023 (UTC)
- Request process
Request to replace qualifiers (2023-01-02)
Request date: 2 January 2023, by: M2545
- Link to discussions justifying the request
- Task description
For all the cases in which participant (P710) statements are qualified with subject has role (P2868), replace with the qualifier object has role (P3831).
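A rough sketch of the qualifier swap with pywikibot, preserving the qualifier's value (selection of the affected items, e.g. via SPARQL, is assumed):

import pywikibot

repo = pywikibot.Site("wikidata", "wikidata").data_repository()

def swap_qualifier(item_id):
    item = pywikibot.ItemPage(repo, item_id)
    item.get()
    for claim in item.claims.get("P710", []):
        for old_qual in list(claim.qualifiers.get("P2868", [])):
            new_qual = pywikibot.Claim(repo, "P3831")
            new_qual.setTarget(old_qual.getTarget())
            claim.addQualifier(new_qual, summary="replace subject has role (P2868) with object has role (P3831)")
            claim.removeQualifier(old_qual, summary="replace subject has role (P2868) with object has role (P3831)")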
- Licence of data to import (if relevant)
- Discussion
- Request process
Request to reduce the precision of birth and death dates of people born and/or died on January 1 (with ODIS number) (2023-01-03)
Request date: 3 January 2023, by: MessensFien
- Link to discussions justifying the request
https://www.wikidata.org/wiki/User_talk:MessensFien
- Task description
Currently there are a few thousand persons with an ODIS ID whose birth and/or death date is too precise (always the 1st of January). Would it be possible to reduce the precision of these statements to just the year (i.e. drop the 1st of January)?
Here the query for the people with property born on January 1: https://w.wiki/6BVE
And here for people with property died on January 1: https://w.wiki/6BVF
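A rough sketch of the precision change with pywikibot (the affected items come from the queries above; dates confirmed by ODIS as genuinely 1 January, as discussed below, would be skipped):

import pywikibot

repo = pywikibot.Site("wikidata", "wikidata").data_repository()

def reduce_to_year(item_id, prop):
    """Lower a 1 January date of birth (P569) or death (P570) to year precision."""
    item = pywikibot.ItemPage(repo, item_id)
    item.get()
    for claim in item.claims.get(prop, []):
        value = claim.getTarget()
        if value is None or value.precision != 11:        # 11 = day precision
            continue
        if value.month == 1 and value.day == 1:
            new_value = pywikibot.WbTime(year=value.year, precision=9)   # 9 = year
            claim.changeTarget(new_value, summary="reduce 1 January date to year precision")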
- Discussion
I see here http://www.odis.be/lnk/en/PS_126253 (from Aenne Brauksiepe (Q272663)) the date is actually January 1st.
Here Johan Van Gastel (Q29336115) the date is duplicated causing constraint violations.
Just my first two random checks. --Bean49 (talk) 11:53, 4 January 2023 (UTC)
- I generated two lists of ODIS ID persons whose death or birth date genuinely is on the 1st of January. Would it be possible to keep their entire dates in Wikidata? For the others (birth: https://w.wiki/6BVE, and death: https://w.wiki/6BVF): if there is no duplication, would it be possible to keep only the year? If there is a duplication, would it be possible to delete it so as to cause no constraint violations? Thanks in advance!
- For persons with death date on the 1st of January: PS_61, PS_8117, PS_11065, PS_11318, PS_11331, PS_444, PS_2771, PS_28640, PS_77288, PS_28373, PS_32616, PS_30192, PS_31385, PS_31773, PS_28188, PS_28978, PS_72861, PS_77304, PS_77305, PS_77878, PS_74619, PS_27964, PS_30880, PS_30586, PS_32118, PS_29201, PS_111295, PS_95186, PS_33806, PS_88469, PS_98687, PS_99353, PS_89621, PS_90379, PS_92169, PS_82497, PS_80814, PS_79404, PS_79412PS_15186, PS_19644,PS_15878, PS_15918, PS_13008, PS_26331, PS_17653, PS_17680, PS_16340, PS_17418, PS_80485, PS_17094, PS_57901, PS_21814, PS_21007, PS_21027, PS_21061, PS_21609, PS_86126, PS_20161, PS_86834, PS_18840, PS_104023, PS_64964, PS_109989, PS_110002, PS_106664, PS_66638, PS_87889, PS_120639, PS_95125, PS_127490, PS_127744, PS_126117, PS_127863, PS_129713, PS_129080, PS_67230, PS_70651, PS_69048, PS_132751, PS_132291, PS_131277, PS_133862, PS_133212, PS_138640, PS_142740, PS_124415, PS_139297, PS_132589, PS_147260, PS_154167, PS_143689, PS_144315, PS_154349, PS_152238, PS_145522, PS_151459, PS_145009, PS_152539, PS_146814, PS_160214, PS_161712, PS_157177, PS_175950, PS_177305, PS_173396, PS_173012, PS_168250, PS_167109, PS_168552, PS_174628, PS_176740, PS_167755, PS_172056, PS_175286
- For persons with birth date on the 1st of January: PS_2613, PS_8261, PS_375, PS_404, PS_32933, PS_31335, PS_31042, PS_77321, PS_4645, PS_116666, PS_113971, PS_111506, PS_29175, PS_111614, PS_33520, PS_96895, PS_91383, PS_90799, PS_94855, PS_94917, PS_95546, PS_89357, PS_90015, PS_89443, PS_15800, PS_15839, PS_12985, PS_18381, PS_15874, PS_26347, PS_17389, PS_78333, PS_18228, PS_79675, PS_79743, PS_21870, PS_21907, PS_20824, PS_19912, PS_19230, PS_87356, PS_20873, PS_70418, PS_70424, PS_64476, PS_68163, PS_70216, PS_64820, PS_103939, PS_120708, PS_127776, PS_129053, PS_127095, PS_129452, PS_67193, PS_66341, PS_67214, PS_65127, PS_127217, PS_138082, PS_138396, PS_135352, PS_134828, PS_135838, PS_142035, PS_130008, PS_132818, PS_128679, PS_142105, PS_144079, PS_152163, PS_152589, PS_145018, PS_160971, PS_157609, PS_161733, PS_165425, PS_166672, PS_166697, PS_172452, PS_172557, PS_176251, PS_169216, PS_174715, PS_169876, PS_172564, PS_168033, PS_179486, PS_178864, PS_178932 MessensFien (talk) 15:25, 5 January 2023 (UTC)
- @MessensFien Done, see https://editgroups.toolforge.org/b/wikibase-cli/9b415e0f29384/ . Vojtěch Dostál (talk) 08:00, 6 July 2023 (UTC)
- Request process
Request to .. (2023-01-07)
Request date: 7 January 2023, by: Laurameadows
- Link to discussions justifying the request
update information on projects and bio
- Task description
https://www.imdb.com/name/nm2568981/?mode=desktop
- Licence of data to import (if relevant)
- Discussion
- Request process
Request to delete spaces from ISNI IDs (2023-02-15)
Request date: 15 February 2023, by: Epìdosis
- Link to discussions justifying the request
- Task description
Remove spaces in all occurrences of ISNI (P213) (main values and references); possibly the changes should be performed quickly, in order to minimize the coexistence of the formats with and without spaces (which could be confusing for data reusers).
- Discussion
- Request process
Request for a bot to import U.S. patents, or request for permission for my bot account (2023-02-28)
Request date: 28 February 2023, by: LucaDrBiondi
- Task description
Bot to import U.S. Patents
- I have a list of U.S. patents and I would like to import these data into Wikidata. Currently the data are in a CSV file.
for example: US11387036; Inductor device ;Patent number: 11387036;Type: Grant ;Filed: Mar 19, 2020;Date of Patent: Jul 12, 2022;Patent Publication Number: 20200312521;Assignee REALTEK SEMICONDUCTOR CORPORATION (Hsinchu) Inventors: Hsiao-Tsung Yen (Hsinchu), Ka-Un Chan (Hsinchu) ;Primary Examiner: Adolf D BerhaneAssistant Examiner: Afework S Demisse;Application Number: 16/823,557
I have already created a bot account (LucaDrBiondi@Biondibot) and now, I think, I need to file a request for permission. Is that right?
Then I would try to write my bot. I will use curl in C. I wrote this code just to connect to Wikidata through my bot:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <curl/curl.h>

#define API_ENDPOINT "https://www.wikidata.org/w/api.php"

// Data structure for storing the login token
typedef struct {
    char token[1024];
} TokenData;

// Callback function for writing data received from the server
size_t write_callback(char *ptr, size_t size, size_t nmemb, void *userdata) {
    size_t bytes = size * nmemb;
    fwrite(ptr, size, nmemb, stdout);
    return bytes;
}

// Callback function for receiving the login token
size_t token_callback(char *ptr, size_t size, size_t nmemb, void *userdata) {
    size_t bytes = size * nmemb;
    TokenData *data = (TokenData *)userdata;
    char *start = strstr(ptr, "\"logintoken\":\"");
    if (start != NULL) {
        char *end = strchr(start + 15, '"');
        if (end != NULL) {
            size_t token_len = end - (start + 15);
            strncpy(data->token, start + 15, token_len);
            data->token[token_len] = '\0';
        }
    }
    return bytes;
}

int main() {
    // Initialize CURL
    CURL *curl = curl_easy_init();
    if (curl) {
        // Step 1: Get login token
        TokenData token_data = {0};
        curl_easy_setopt(curl, CURLOPT_URL, API_ENDPOINT);
        curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 0L);
        curl_easy_setopt(curl, CURLOPT_POST, 1L);
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "action=query&meta=tokens&type=login&format=json");
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, token_callback);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &token_data);
        curl_easy_perform(curl);
        // Step 2: Login
        char login_data[1024];
        snprintf(login_data, sizeof(login_data), "action=login&lgname=%s&lgpassword=%s&lgtoken=%s&format=json",
                 "xxx@xxxx", "xxxxxx", token_data.token);
        printf("token_data.token : %s", token_data.token);
        curl_easy_setopt(curl, CURLOPT_URL, API_ENDPOINT);
        curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 0L);
        curl_easy_setopt(curl, CURLOPT_POST, 1L);
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, login_data);
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_callback);
        curl_easy_perform(curl);
        ...
        curl_easy_cleanup(curl);
    }
    return 0;
}
But I get the following error:
{"login":{"result":"Failed","reason":"Unable to continue login. Your session most likely timed out."}}
Could someone help me take a step forward, please?
I have written more than 500 pages on it.wikipedia, but on Wikidata... I am a goat! But I want to learn.
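Two details of the snippet above are worth checking. First, MediaWiki login tokens are only valid within the session that issued them, so the token request and the login request must share session cookies; with libcurl that means enabling the cookie engine (e.g. CURLOPT_COOKIEFILE set to an empty string) on the handle before the first request. Second, the offset start + 15 skips the first character of the token, because the literal "logintoken":" is 14 characters long. For comparison, a minimal sketch of the same login flow in Python, where requests.Session keeps the cookies automatically (credentials are placeholders):

import requests

API = "https://www.wikidata.org/w/api.php"
session = requests.Session()          # the Session object keeps the cookies between calls

def login(username, password):
    # Step 1: get a login token (inside the same cookie session)
    token = session.get(API, params={
        "action": "query", "meta": "tokens", "type": "login", "format": "json",
    }).json()["query"]["tokens"]["logintoken"]
    # Step 2: log in with that token
    return session.post(API, data={
        "action": "login", "lgname": username, "lgpassword": password,
        "lgtoken": token, "format": "json",
    }).json()

# login("LucaDrBiondi@Biondibot", "bot password here")   # placeholder credentials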
- Licence of data to import (if relevant)
- Discussion
- Request process
Request to unify links to the Czech Bridge Management System (2023-04-16)
Request date: 16 April 2023, by: ŠJů
- Link to discussions justifying the request
Presumed non-controversial; most of the affected references (maybe all) were added by me. The purpose of the request is to unify the links to an identical source into the preferred form.
- Task description
Please find all references where reference URL (P854) is "http://bms.clevera.cz/Public", "http://bms.clevera.cz" or "http://bms.clevera.cz/" and replace it with stated in (P248) = Bridge Management System (Q108147651) (if the reference contains some qualifiers, e.g. retrieved (P813), transfer them to the new form).
- Discussion
These links reference the BMS web page generally. In most cases, a direct link to a detail of a specific bridge would be more appropriate, but unfortunately, the BMS page (http://bms.clevera.cz/Public) as well as the road net map (https://geoportal.rsd.cz/apps/silnicni_a_dalnicni_sit_cr_verejna/) seem to not enable direct URL links to details of road objects. If somebody discovers a way to refer directly to a detail of a specific object with a specific public URL, that would certainly be appropriate to apply such a method in the references.
- Request process
Request to automatically add P11780 from ORCID and other identifiers (2023-05-09)
Request date: 9 May 2023, by: Tomodachi94
- Link to discussions justifying the request
- This was mentioned in the property proposal.
- Task description
Import various identifiers from Humanities Commons member ID (P11780), such as ORCID iD (P496), Twitter username (P2002), Mastodon address (P4033), and other values.
- Licence of data to import (if relevant)
Unknown.
- Discussion
- Request process
Request for bot to mass-add descriptions to items lacking one (2023-06-14)
Request date: 14 June 2023, by: Urban Versis 32
- Link to discussions justifying the request
- Task description
UrbanBot's task is to mass-add English descriptions to items that don't have one.
The bot operates in the following process (bot operator is me):
1. The bot operator will first enter a category name from the English Wikipedia. This category will be used to group similar pages (items on Wikidata) which will all have the same description added to them.
2. The bot operator will enter the description to be added to the pages in the Wikipedia category.
3. The bot will follow through these steps for each page:
3a. The bot will check if the Wikipedia page has a corresponding item.
3b. The bot will check if the item already has a description.
3c. If the Wikipedia page has a corresponding item and the item does not already have a description, the bot will write the description specified by the bot operator in step 2 into the item.
3d. The bot will loop through to the next page in the category and run all steps in step 3 again.
Note: Because the bot requires the bot operator to enter the English Wikipedia category and the description for the items, the bot is semi-automated.
Source code: Main repository for UrbanBot's code · Source code file for task
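A minimal sketch of the flow described in steps 1–3 using pywikibot (the function and the exact skip conditions are illustrative; UrbanBot's actual source is linked above):

import pywikibot

def add_descriptions(category_name, description, lang="en"):
    enwiki = pywikibot.Site("en", "wikipedia")
    category = pywikibot.Category(enwiki, category_name)
    for page in category.articles(namespaces=0):
        try:
            item = pywikibot.ItemPage.fromPage(page)    # step 3a: corresponding item?
        except pywikibot.exceptions.NoPageError:
            continue                                    # no item for this page: skip
        item.get()
        if lang in item.descriptions:                   # step 3b: already has a description
            continue
        item.editDescriptions({lang: description},      # step 3c: write the description
                              summary="add English description (operator-supplied)")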
- Discussion
- Request process
Request to add statements for year in motorsport for country categories (2023-07-04)
Request date: 5 July 2023, by: Lights and freedom
- Task description
This is a request to add the following statements for all year in motorsport for a country categories:
- follows (P155) - the previous year in motorsport category for that country
- followed by (P156) - the following year in motorsport category for that country
- category combines topics (P971) - year (if not already present) and motorsport in country (if not already present; some countries use the word "motorsport", others use "motorsports").
For example, see history of the item "Category:2018 in Dutch motorsport".
No urgency, this is just something that should be done.
- Licence of data to import (if relevant)
- Discussion
I have written the following query to find items lacking followed by (P156). However, I wonder whether anybody actually uses followed by (P156) statements.
SELECT ?a ?b ?aLabel ?bLabel WHERE {
  ?y wdt:P31 wd:Q3186692.
  hint:Prior hint:runFirst true.
  ?y ^wdt:P971 ?a.
  ?a wdt:P31 wd:Q4167836.
  MINUS {?a wdt:P156 ?an}.
  MINUS {?a wdt:P971 ?as. FILTER(?as != ?x && ?as != ?y).}.
  ?y wdt:P156 ?yy.
  BIND(wd:Q10589714 AS ?x). # Modify this to another topic if finished
  ?a wdt:P971 ?x.
  ?b wdt:P971 ?x.
  ?yy ^wdt:P971 ?b.
  ?b wdt:P31 wd:Q4167836.
  MINUS {?b wdt:P155 ?bn}.
  MINUS {?b wdt:P971 ?bs. FILTER(?bs != ?x && ?bs != ?yy).}.
  SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en". }
} LIMIT 100
Midleading (talk) 08:38, 17 September 2023 (UTC)
- Request process
Request to change Turkish characters in English labels (2023-07-15)
Request date: 15 July 2023, by: Devrim ilhan
- Link to discussions justifying the request
- Task description
The following letters are not found on the English keyboard or in the English alphabet: ğ, ç, ş, ü, ö, ı.
These should be replaced with the following letters:
ğ > g
ç > c
ş > s
ü > u
ö > o
ı > i
Ğ > G
Ç > C
Ş > S
Ü > U
Ö > O
I > I (same)
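For illustration only, the proposed mapping expressed as a simple Python transliteration table (as the discussion below suggests, such ASCII forms may be better suited as aliases than as replacement labels):

# The letter mapping above as a translation table
TURKISH_TO_ASCII = str.maketrans("ğçşüöıĞÇŞÜÖ", "gcsuoiGCSUO")

def ascii_form(label):
    """Return the ASCII form of a Turkish label, e.g. usable as an alias."""
    return label.translate(TURKISH_TO_ASCII)

# ascii_form("Recep Tayyip Erdoğan")  ->  "Recep Tayyip Erdogan"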
- Licence of data to import (if relevant)
- Discussion
- Oppose Why? w:en:Recep Tayyip Erdoğan and w:en:Kemal Kılıçdaroğlu (to take two random examples) are spelled with some of those special characters, why should we have different labels at Wikidata? --Emu (talk) 18:40, 15 July 2023 (UTC)
- The two alphabets are different from each other. When translating, either transcription or transliteration is required. It cannot be written the same way. There is also a mistake in the English Wikipedia. Devrim ilhan (talk) 19:12, 15 July 2023 (UTC)
Comment While I do not think we should change all labels as requested by Devrim, I believe we should add aliases to these items with the non-diacritic letter equivalents ... it should make finding those items easier. Jonathan Groß (talk) 18:38, 20 July 2023 (UTC)
- How do you type them with English keyboard? Can you send a video? --Devrim ilhan (talk) 05:56, 21 July 2023 (UTC)
- Trump's letter to Erdogan: http://archive.today/1KFxC . Where is ğ? --Devrim ilhan (talk) 10:07, 24 July 2023 (UTC)
- In my opinion, there is no need to change the correct spelling of Turkish names in English labels just because some characters are not present on an English keyboard. Most search engines manage fine to produce "Erdoğan" as a result even when you typed "Erdogan" in the search bar. Wikidata's internal search does the same. So it would be sufficient to define spellings like "Erdogan" as aliases.
Jonathan Groß (talk) 16:27, 24 July 2023 (UTC)
- Oppose From my experience with Catalan, which also has characters not found in English, the search box in Wikidata and most boxes in applications work fine with the English ones, so no need to change labels. Additionally, if it were found that for some applications those characters are a problem, the spelling with English characters only should be added as an alias, never as a label.--Pere prlpz (talk) 16:24, 19 August 2023 (UTC)
- Raise the flags, raise! Everyone is defending the use of Turkish characters. Then there is no need to write in English either, since the whole world has apparently become Turkish! --Devrim ilhan (talk) 16:55, 19 August 2023 (UTC)
- Request process
Request to add educated at for people with MGP ID. (2023-08-11)
Request date: 11 August 2023, by: Sharouser
- Link to discussions justifying the request
- Task description
There are many people who have an MGP ID and descriptions but do not have a value for the educated at property. If you search "Julius-Maximilians-Universität Würzburg" -Q161976, you can find such examples. Sharouser (talk) 11:59, 11 August 2023 (UTC)
- Licence of data to import (if relevant)
- Discussion
- Request process