Commons:Village pump/Proposals
This page is used for proposals relating to the operations, technical issues, and policies of Wikimedia Commons; it is distinguished from the main Village pump, which handles community-wide discussion of all kinds. The page may also be used to advertise significant discussions taking place elsewhere, such as on the talk page of a Commons policy. Recent sections with no replies for 30 days and sections tagged with {{Section resolved|1=--~~~~}} may be archived; for old discussions, see the archives; the latest archive is Commons:Village pump/Proposals/Archive/2023/12.
- One of Wikimedia Commons’ basic principles is: "Only free content is allowed." Please do not ask why unfree material is not allowed on Wikimedia Commons or suggest that allowing it would be a good thing.
- Have you read the FAQ?
SpBot archives all sections tagged with {{Section resolved|1=~~~~}} after 5 days and sections whose most recent comment is older than 30 days.
Restrict webp upload?[edit]
https://commons.wikimedia.org/w/index.php?sort=create_timestamp_desc&search=filemime%3Awebp
I suggest restricting upload of WebP files to autopatrolled users (like MP3), because very often WebP uploads are copyvios taken from the internet or previews of SVG logos. RZuo (talk) 14:07, 22 November 2023 (UTC)
- Support Currently I would say 90% of WEBP files are copyright violations. Yann (talk) 15:19, 22 November 2023 (UTC)
- Support. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 15:30, 22 November 2023 (UTC)
- Support The vast majority of webp files uploaded here are copyvios. Exceptions for individual users are easy to add to an edit filter. Pi.1415926535 (talk) 20:05, 22 November 2023 (UTC)
- Support.--Vulcan❯❯❯Sphere! 08:23, 23 November 2023 (UTC)
- Support Per Pi.1415926535. --Adamant1 (talk) 08:28, 23 November 2023 (UTC)
- Support Seems ok to me. If we ever run into real problems with such a policy we can modify it. --Rosenzweig τ 08:53, 23 November 2023 (UTC)
- Support I wonder why we still have no such restriction. Юрий Д.К 11:56, 23 November 2023 (UTC)
- Support A lot of WEBP files I see when I check files are copyvios. Abzeronow (talk) 16:09, 23 November 2023 (UTC)
- Support I don't see how this could go wrong; this would definitely reduce copyright violations. 20 upper 08:07, 25 November 2023 (UTC)
- Support per Yann's claim. I want to encourage good uploads, but Commons must also guard against copyvios. Recent proposals have an appropriate aim of reducing copyvios and patrollers' workload. The balance here favors restriction. Glrx (talk) 18:56, 25 November 2023 (UTC)
- Strong support, seconding @Yann, Abzeronow, and Glrx: et al. Examples of my autogenerated messages about WebP copyvios: this, this, and this. And I can still remember the very first WebP file I encountered here, which was itself a copyvio: Commons:Deletion requests/File:Beijing Skyline.webp. JWilz12345 (Talk|Contrib's.) 08:17, 26 November 2023 (UTC)
- Support Would reduce copyvios for sure; I'm not sure the proportion is as high as some have mentioned based on spot checking, but I usually check the ones that look obvious so it's not exactly a random sample. Gnomingstuff (talk) 23:05, 29 November 2023 (UTC)
- Oppose I think in general, discriminating on filetype is a bad direction (same with mp3). It further complicates and obfuscates the upload process and doesn't stop copyright violations, it stops contributors. Most of these can easily be spotted by filtering the upload list on new contributors. Or we can just ban SVGs as well, because most logos are copyvios. —TheDJ (talk • contribs) 18:46, 30 November 2023 (UTC)
- If we had enough people checking the unpatrolled uploads, we would not need such filters. Unfortunately we do not have enough people checking uploads and edits and therefore need tools to reduce the workload. GPSLeo (talk) 19:31, 30 November 2023 (UTC)
- I think that creating these kinds of non-transparent and highly confusing roadbumps is part of the reason WHY we don't have enough people. That's my point. And I note that just two posts below this we already have someone getting tripped up with the SVG translator software because of a similar rule #File overwriting filter blocks SVG Translate. It's one of those "a small cut doesn't seem so bad, until there are a thousand cuts" kind of problems. Considering how much people complain about UploadWizard, stuff like this isn't helping lower the barrier to entry either. —TheDJ (talk • contribs) 11:07, 9 December 2023 (UTC)
- Plus we could just make patrolling itself easier by having uploads sorted per date: a single patroller can simply take a few minutes to patrol all new ".webm" files. Do this for every file type and we don't need to exclude people from uploading. If a patroller only wants to patrol videos, sounds, PDFs, etc., they now have to go through all uploads, but by making it easy to filter and making these pages easily accessible to everyone and transparent (like OgreBot's Uploads by new users) we could easily patrol everything with fewer people. --Donald Trung 『徵國單』 (No Fake News 💬) (WikiProject Numismatics 💴) (Articles 📚) 11:55, 9 December 2023 (UTC)
- Support. Very few cameras or image editing tools output WebP images; when one is uploaded, it's almost always because it was downloaded from a web site which automatically converts images to that format for display (and, thus, is very likely to be a copyright violation). We already have abuse filters which block other types of uploads from new users which are overwhelmingly likely to be problematic, like MP3 files (Special:AbuseFilter/192), PDFs (Special:AbuseFilter/281), and small JPEGs (Special:AbuseFilter/156). Omphalographer (talk) 04:25, 3 December 2023 (UTC)
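For reference, the existing filters cited above show what such a restriction looks like mechanically. Below is a minimal sketch of a comparable rule in AbuseFilter syntax; it is hypothetical, not the text of any deployed Commons filter (`action`, `file_mime`, and `user_groups` are standard AbuseFilter variables, but the exact conditions here are assumptions):

```
/* Hypothetical sketch only: flag WebP uploads from users
   without autopatrolled rights. Not an actual Commons filter. */
action == "upload" &
file_mime == "image/webp" &
!("autopatrolled" in user_groups) &
!("sysop" in user_groups)
```

A real filter would typically pair this condition with a "warn" or "disallow" action and a custom interface message explaining the restriction.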
- Oppose, per TheDJ. Additionally, this would exclude a lot of people who contribute to other Wikimedia websites but aren't necessarily active here; a user could be a trusted user, an admin, a prolific contributor, etc. on another Wikimedia website and "a noob" at Wikimedia Commons. They could have good knowledge of how video files work and which ones are and aren't free, but they will find that they can't upload anything here. If we keep making Wikimedia Commons more exclusive we will fail at our core mission to be for all Wikimedians. If new users are more likely to have bad uploads then we should have a page like "Commons:Uploads by unpatrolled users by filetype/.webm/2023/12/09" (which includes all users who aren't auto-patrolled); this proposal will simply exclude too many people. We can't know which people and uploads we exclude, because a user with a free video file will come here, attempt to upload, see "You have insufficient privileges for this action", and then never return (without ever telling anyone what (s)he wanted to upload and why they didn't). "Anyone can contribute" is the core of every Wikimedia website; the moment you compromise this you lose whatever made this place Wikimedia. --Donald Trung 『徵國單』 (No Fake News 💬) (WikiProject Numismatics 💴) (Articles 📚) 11:49, 9 December 2023 (UTC)
- Strong oppose, outlawing a file format will just lead to such files being converted into a different format and uploaded in a different way – but now with fewer possibilities to scan and patrol for them. This is classic prohibition: by outlawing X, users of X will find new ways to still do it, but in places where it can no longer be observed easily. I'm not even arguing in favor of the allegedly "just" 10% of .webp images that are in fact okay, which is a valid concern as well in my opinion. So: use this helpful file format to scan more efficiently for copyvios, rather than outlaw it and have the copyvios enter Commons nonetheless via still uncharted routes. --Enyavar (talk) 15:25, 18 December 2023 (UTC)
- Comment Given that WebP files are essentially Google's replacements for JPGs, PNGs, and GIFs, we cannot restrict WebP uploads to autopatrol users until we restrict uploads of those three formats too (as well as SVG, even for own uploads), because if non-patrolled users were blocked from uploading WebP, they could easily convert those WebP files to PNG or JPG as a way to upload the images to Commons. We should find a way to close the loophole of new users converting WebP files to a different image format before we can restrict WebP uploads to users with autopatrol rights, even for a user's own WebP uploads. Yayan550 (talk) 15:33, 2 January 2024 (UTC)
- @Yayan550: I think you are missing the point here. Of course if they know what they are doing they can convert the file. The idea here is sort of a "speed bump" for a pattern that usually indicates someone who is ignorantly uploading a copyright violation. - Jmabel ! talk 19:24, 2 January 2024 (UTC)
- Precisely. And, as I noted above, we already have AbuseFilter "speed bumps" for other types of uploads, like MP3 files, which are particularly likely to be copyvios. We're aware that users can bypass the filter and upload those files after conversion, but we can explain why an upload is being blocked in the AbuseFilter message (cf. MediaWiki:abusefilter-warning-mp3), and we can review the filter logs to see if users are deliberately bypassing the filter for infringing content. Omphalographer (talk) 21:24, 9 January 2024 (UTC)
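As a concrete illustration, the interface message for such a filter could be modelled on MediaWiki:abusefilter-warning-mp3. The wording below is invented for illustration only, not the live message:

```
'''This file appears to be in WebP format.'''
Very few cameras or image editing tools produce WebP images; most WebP
files were downloaded from websites that convert images automatically,
and such files are usually copyrighted. If you created this image
yourself, please upload the original file (for example the JPEG or PNG)
instead.
```

The point of pairing the filter with a message like this is that a blocked uploader learns why the upload failed and what to do instead, rather than just seeing a generic permissions error.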
- Support Infrogmation of New Orleans (talk) 20:01, 9 January 2024 (UTC)
- Support The issue seems similar to MP3 files. It's about a practical approach based on experience. No one, I assume, has anything against MP3 or WEBP as file types in principle, but it's just a matter of fact that Commons uploads of these file types tend to be copyvios more often than others, so a measure similar to the MP3 upload restriction already in place seems only sensible. The proposal is not about "outlawing" the format but restricting it to autopatrol users. Gestumblindi (talk) 14:22, 14 January 2024 (UTC)
Disabling talk pages of deletion requests[edit]
While there now exists Template:Editnotices/Group/Commons talk:Deletion requests that notifies users to make comments on the deletion request pages themselves, it is evidently ignored, as seen in 54conphotos' comments on the talk page of Commons:Deletion requests/File:KORWARM2.jpg (which I transferred to the main page) and in Amustard's comment on a Turkmen deletion request (which I subsequently transferred to the mainspace). As it is very evident that the edit notice is being ignored, I am proposing that the "Talk" namespace be disabled for all pages with the prefix "Commons:Deletion requests/". This should be a permanent solution to incidents that could have been better avoided. For existing talk pages of deletion requests with comments, the comments (including mine, if ever I responded to uploaders in the talk namespace) should be transferred to the deletion request mainspaces, with consideration of the dates of the comments or inputs. JWilz12345 (Talk|Contrib's.) 09:10, 26 November 2023 (UTC)
- Support At least, the use of DR talk pages should be restricted to power users (admins, license reviewers?). Yann (talk) 09:37, 26 November 2023 (UTC)
- @Yann that may be OK. Restricted to admins and license reviewers. Or the talk pages could still exist visually, but those who lack the user right, even autopatrolled users, would be barred from editing them and presented with a boilerplate notice saying that they don't have the right to edit talk pages and should instead comment on the main discussion page, with a link to the DR itself in the notice (do not expect several new users to comprehend what they are reading in the notices). JWilz12345 (Talk|Contrib's.) 10:09, 26 November 2023 (UTC)
- Support --Krd 11:23, 26 November 2023 (UTC)
- Support Christian Ferrer (talk) 11:56, 26 November 2023 (UTC)
- Thank you for pointing out this Template:Editnotices/Group/Commons talk:Deletion requests location in Wikimedia. It was not ignored as you said in your comment; it simply was nowhere to be found at the time I commented. It's a shame it's too late to place a comment there, as I would have done so. Even your notes to me are very confusing, as the names of the comment pages do not match up so that I can find them, as with all the previous notes received from others. Being new to this platform, I have found it very confusing to find things that are suggested when seeing comments by others.
- Hopefully I will have the hours to research and gain a better understanding of the workings of Wikimedia Commons in the future. Thanks again! 54conphotos (talk) 13:32, 26 November 2023 (UTC)
- Support or, if it's easier, systematically turn them into redirects to the relevant project page. - Jmabel ! talk 21:56, 26 November 2023 (UTC)
- Support --Adamant1 (talk) 00:35, 27 November 2023 (UTC)
- Support. Some good ideas above from Yann and Jmabel. We could also explore autotranscluding them to the bottoms of the DR subpages themselves. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 00:49, 27 November 2023 (UTC)
- Support. Yes, good idea, esp. with Jmabel’s and Yann’s additions. -- Tuválkin ✉ ✇ 11:34, 27 November 2023 (UTC)
- Support restricting it to anyone with autopatrol; I think these users are knowledgeable enough to know that the talk page isn't for discussing the deletion. We must create an informal and easy-to-understand AF notice though. -- CptViraj (talk) 12:19, 9 December 2023 (UTC)
- Another one, this misplaced comment by ApexDynamo, which I have transferred to the main nomination page. CptViraj, I don't think even autopatrolled users are knowledgeable enough to know that talk pages are not the proper place to comment. Example: misplaced comments by Exec8 (which I also transferred soon after initiating this proposal). I suggest the use of those talk pages must be restricted to admins/sysops and license reviewers. JWilz12345 (Talk|Contrib's.) 09:38, 14 December 2023 (UTC)
- Support I have never used a talk page of a DR, nor have I seen one being used. The DRs are usually also frequented by very few editors, and the comments can easily be distinguished from one another. Paradise Chronicle (talk) 22:13, 30 December 2023 (UTC)
- One more problematic use, by @Balachon77: (see this). JWilz12345 (Talk|Contrib's.) 01:00, 8 January 2024 (UTC)
- Another problematic use, by SiCebuTransmissionCorrecter (talk · contribs) – Commons talk:Deletion requests/File:Line construction of Hermosa-San Jose Transmission Line. The line constructs above Hermosa-Duhat-Balintawak transmission line.png. JWilz12345 (Talk|Contrib's.) 00:10, 9 January 2024 (UTC)
- no no no no no no! SiCebuTransmissionCorrecter (talk) 01:12, 10 January 2024 (UTC)
Low quality AI media deletion system[edit]
I have seen a few AI-generated images on Wikimedia and I am concerned that the rate of new low-quality AI-generated images will overtake non-AI images on Wikimedia Commons (it requires much less effort to generate an image with AI than to make a non-AI image). I am proposing a system to prevent this: any AI image that a Wikimedia editor deems "low quality" will be discussed and voted upon; if it is deemed low quality and therefore unlikely to be useful compared with normal images, it may be deleted. I think this system will help prevent a flood of low-quality AI images from disrupting Wikimedia (which I think is a possibility based on how easy it is to make AI images; in the time it took to write this I could have made more than 36 AI images). 50tr5 (talk) 03:07, 13 December 2023 (UTC)
- @50tr5: I'm not sure "low-quality" is so easily defined, but I encourage you to participate in Commons:AI-generated media and its talk page, where we are trying to hash out some sort of guideline or policy on AI-generated media. I, for one, hope we will come up with criteria that allow deletion of most AI-generated media. - Jmabel ! talk 18:28, 13 December 2023 (UTC)
- "Low quality" is kind of vague, and I don't think the project is well served by endless debates over where the line is every time someone nominates an image of AI artwork for deletion. Plus the quality shouldn't matter anyway. High quality or not, AI artwork is fundamentally at odds with the goals of the project, at least as it stands now and outside of some extremely rare cases. I like @Gestumblindi: 's suggestion of only allowing AI artwork that is being used on projects though, but with the caveat that unused images should qualify for speedy deletion. Since there's already been enough bad-faith arguing around this, there shouldn't be a need to relitigate things with every image if we all agree they shouldn't be hosted on Commons to begin with. We could also make an exception for AI artwork that's being used on people's personal pages. Regardless, I think that would be a good middle ground between outright banning it and taking no action at all, since there's clearly a need to do something about it. --Adamant1 (talk) 18:58, 13 December 2023 (UTC)
- They could also be required to be put in separate categories like "AI-generated XY" or "XY in art" rather than "XY", which is already usually the case. This could be used in the search engine, where there could be a toggle button to hide AI-generated images (I think I also supported or proposed a filter toggle for NSFW images, such as sexually explicit scenes and gore, to prevent such images from showing up in unexpected searches). Creating many low-quality AI images in a short time has been possible for over a year now, yet no such flood has occurred here, so the concerns are overstated; low-quality AI-generated images are already frequently deleted, and I'd support doing that more often for images in Category:Poor quality AI-generated images, as well as implementing a more clearly visible 'Warning: This image is AI-generated' template, which could also be used in places where images are displayed without their file descriptions, such as search results. Prototyperspective (talk) 19:04, 13 December 2023 (UTC)
- I've said it in other places, but requiring AI artwork to be put in separate "AI-generated XY" or "XY in art" categories rather than "XY" does absolutely nothing to deal with the issue. Just like it didn't help at all to transfer the fake AI artwork of Giovanna IV di Napoli out of Category:Giovanna IV di Napoli. The images were still deleted as out-of-scope fan art regardless. At least there won't be similar time-wasting debates to the one in Commons:Deletion requests/Files in Category:Giovanna IV di Napoli by Bing Image Creator if AI artwork is mostly banned though. Really, the whole thing has clearly been a massive time suck, one that you're at the losing end of. So why not meet everyone else halfway and just accept that the best option here is to ban it except in cases where the images are being used on other projects, instead of acting like putting the images in separate categories for AI artwork is some kind of magical cure-all solution when it isn't? At the end of the day the images are going to be deleted regardless of whether they are in separate categories or not. --Adamant1 (talk) 23:01, 13 December 2023 (UTC)
- Because you refuse to take in and address simple rational points, like my question about why it would be fan art when COM:Fan art starts with "…unofficial artistic representations of elements or characters in an original work of fiction" – what's the original work of fiction? It's not fan art, since it is depicting a real historical character in interesting sceneries. Like in the prior debate, you keep on making claims without reasons or explanation, such as that it would do nothing to deal with the issue, without explaining why, when I explained explicitly how exactly it would deal with the issue. AI art has tremendous potential; it's like banning all images made or modified with Photoshop... you really think that is a good idea? Separate categories aren't needed either. Prototyperspective (talk) 23:11, 13 December 2023 (UTC)
- It's nothing like banning all images modified with Photoshop. That's the problem with your approach to this. You use completely ridiculous comparisons that have absolutely nothing at all to do with the topic and then act like it's a valid point that people are just unwilling to address when they don't indulge in your meaningless side tangents. At least have an argument that's relevant to the topic. Saying banning AI artwork would be like doing the same for images edited in MS Paint is just laughable on its face. Plus it has absolutely nothing to do with this anyway. You act like we shouldn't have any standards whatsoever just because people modify images in photo editing apps. It's not a serious argument. --Adamant1 (talk) 23:26, 13 December 2023 (UTC)
- "Just laughable on its face" – your approach, right alongside calling me "You're probably one of those people who think Bitcoin is going to replace fiat currency any second now too", aren't you? Lmao. Your premise is that AI art is useless; we can argue whether or not it is as useful as Photoshop – I think it could be more useful, since eventually you could create a high-quality image of anything you can imagine – but that it can be useful is enough of a reason to not ban it based on your unsubstantiated assumptions and quite clear anti-AI bias. I won't continue this debate with you from now on. It just puts walls of text where there should be actual arguments and actual addressing of points. It's a tool usable for so many purposes and free media gaps. Prototyperspective (talk) 23:41, 13 December 2023 (UTC)
- "It can be useful is enough of a reason to not ban it based on your unsubstantiated assumptions and quite clear anti AI bias." It seems like essentially everyone else disagrees with you. But all you do is chalk any minor disagreement with your opinions up to ignorance or anti-AI bias.
- "Your premise is that AI art is useless." That must be why I've said multiple times now that we should keep images that are being used on other projects. Just like how you've repeatedly treated me like I have no experience with AI artwork when I've said at least four times that I have a Flickr account that I upload images from Dall-E to. All you do is box ghosts. Apparently you're just that incapable of discussing this in an honest, good-faith way. Be my guest and don't debate it with me anymore though. I'm sick of repeating myself over and over about things that you're either incapable of or unwilling to get anyway. So I could really care less. I'm sure cooler heads than yours will ultimately prevail. --Adamant1 (talk) 00:02, 14 December 2023 (UTC)
- AI art does have potential, but it has not reached a stage (at least with the AIs normal internet users can get) where it can reliably produce images that look like something someone would use in, say, a presentation (compared with normal images). 50tr5 (talk) 15:54, 14 December 2023 (UTC)
- Agree. Prototyperspective (talk) 16:06, 14 December 2023 (UTC)
- I don't know where you got that info from (you don't give any sources, like the other guy), but some of the stuff out there is convincing enough that I can only identify a fake by its context. Anything with sex and celebrities – most likely AI; but grainy, presumably "historical" photographs in lower res on Flickr? I can't tell. Most cheesy artifacts and imperfections in a high-res AI work would disappear when scaling it down and reducing colors. Alexpl (talk) 19:05, 14 December 2023 (UTC)
- Check out Fooocus. I've been using it for a minute now and it produces some pretty crazy accurate stuff. The discussion of quality is really just a red herring though. At least it is in most cases, since AI artwork is OOS fan fiction at best in most (if not all) cases anyway. And there's no reason we need AI artwork in the first place even in the rare instances where it isn't. Take an image of a cat: there are already millions of freely available images of cats out there, so there's no reason to bother with an AI-generated image of a cat to begin with.
- The only instance where it might be useful is with images of objects or people where there are no freely available alternatives. But it's already been established that it falls flat in those instances, the AI artwork of Giovanna IV di Napoli being one example, and I'm sure there are many others. It would be totally ridiculous to allow for, say, AI-generated images of old west towns where there are no publicly available free images of them. So realistically, what's the actual point in allowing AI artwork to begin with? What's the actual benefit to the project, outside of nonsense claims that the technology is cool and anyone who disagrees just hates AI? Because at the end of the day we seem to be doing perfectly fine without it. A few people can throw tantrums over it being banned, but there's no reason not to ban it if there isn't even a valid use case for it to begin with. --Adamant1 (talk) 23:14, 14 December 2023 (UTC)
- Well, now some argue that it's too inaccurate and low-quality while others argue it's too high-quality/high-resolution/accurate. "There's already millions of freely available images of cats out there. So there's no reason to bother with an AI generated image of a cat" – agree, if it's just a mundane cat without any unique noteworthy features worthy of inclusion, such as a realistic cyborg cat, a cat according to some mythological tale, an image showing an anthropomorphic cat society, or a fictional cat breed, or the cat not being the only main content of the image. Saying "it shows a cat" is not enough; it should be "it merely shows a cat without adding much". While no such restrictions are in place for other media, I wouldn't object to setting some reasonable requirements if people think that would make things better. "The only instance where" – it can also be higher quality for subjects for which one or a few images are available; it doesn't mean these need to be used, but it can be good to have them. Moreover, it's probably not a good approach to just assume you have thought of every constructive application. I already listed many use cases; why do I need to repeat them over and over? From illustrating what an art style looks like to depicting a concept or a subject in fiction/art. From sustainable city design to art movements/aesthetics, extinct animal species and scientific concepts. It's a tool for countless purposes, and I've been explicit and exhaustive in the exemplary valuable applications I explained. Prototyperspective (talk) 23:32, 14 December 2023 (UTC)
- Most AI images are out of scope. I would encourage you to nominate any that aren't in use and don't seem likely to be used for deletion per COM:EDUSE. The Squirrel Conspiracy (talk) 23:24, 14 December 2023 (UTC)
- Thanks to Nosferattus, Jmabel et al for the Commons:AI-generated media discussion. Agree this has been a problem - there have been some genuinely low quality "AI" images uploaded (e.g. supposed portraits of people with body parts in incorrect places); there have also been IMO too many uploads of images which superficially look good aesthetically, but are not useful as educational material for in scope subjects. Some uploaders of such material seem indignant that others on Commons object - but just being an AI image, even a significantly better than average AI image, does not in itself mean it belongs on Commons. There are many other websites online for sharing images. Commons is not social media nor a personal artwork sharing site. -- Infrogmation of New Orleans (talk) 23:56, 14 December 2023 (UTC)
Comment Here's an alternative proposal that, in my opinion, achieves similar goals while minimizing user intervention. 1. Automate AI image deletion: Delete unneeded AI images after a specified period of inactivity (e.g., 3 months) from projects. This declutters the Commons without requiring manual intervention. 2. Standardize filenames: Rename remaining AI images to a consistent format like AI_Image_Filename.jpg. This helps users searching by filename easily identify and avoid unintentionally using them in active projects (and they can still use them if AI serves their purpose). AI images require minimal effort to generate. Leaving unused ones clutters the Commons and can lead to unintentional use. Anyone needing a similar image can readily create a new one with current AI tools. I believe this approach balances practicality and potential misuse of AI-generated content. What do you think? Rkieferbaum (talk) 00:58, 16 December 2023 (UTC)
- @Rkieferbaum: sorry, but I don't like it at all. Among other things, I'm extremely opposed to introducing automated deletion. I do not trust a bot-based rule to decide what is "unneeded". For a simple example, but one that I think is sufficient to show the problem, consider an image where there has been some back-and-forth as to whether to use it in a Wikipedia article. There is no reasonable way for a bot to be able to tell that has been going on. Similarly, a bot cannot determine that a particular image is very likely to be used outside of WMF projects. And (unless it's by human tagging) a bot cannot tell that an image is part of a set intended to track the evolving behavior of a particular generative AI tool over time.
- I'd be open to the naming thing, but I don't necessarily think it is a good idea. I'm always hesitant to rely on any meaning to filenames. Files get renamed. Properties should be tracked with templates, categories, and SDC, not filenames. - Jmabel ! talk 04:04, 16 December 2023 (UTC)
- +1 The AI nature of a file has to be clear without relying on people choosing the correct category. Let´s make an "AI" in the filename mandatory. Alexpl (talk) 05:52, 16 December 2023 (UTC)
- +1 Agree with Jmabel and strongly oppose the proposal #1 but would support the #2 if people think that would be useful and no better option is found and feasible. A better option could be only displaying a tag 'AI image' on the thumbnail or adding that (or e.g. 'Made using AI') to the filename without the file having to be named so. Prototyperspective (talk) 13:04, 16 December 2023 (UTC)
- +1 to the second proposal. I suggested something similar in the last discussion. Although it's a toss up for me between a template that can be added to files or requiring the file names indicate the image was created by an AI generator. Really, both approaches have their pros and cons. I'm against any kind of automated deletion process though. But the two proposals aren't mutually exclusive either. The best option is probably Gestumblindi's proposal in conjunction with having a way to indicate the files are generated by AI, however its ultimately implemented. --Adamant1 (talk) 16:00, 16 December 2023 (UTC)
- The deletion being automated isn't as important as avoiding a scenario where each image has to be examined by multiple users to be deemed low quality. I think we should avoid at all costs a situation where each single low quality image takes a few seconds to generate and upload but consumes a hundred times that to be deleted. An alternative to the automated process would be allowing something like PROD in such cases. Unlike with regular uploads, with AI images the "burden of proof" of usefulness should lie with whoever thinks they should be kept. As for the file naming, I think it's important that the name itself be changed rather than the alternatives, because as far as I can tell anything else wouldn't be as conspicuous and could easily be missed within the other platforms. Rkieferbaum (talk) 02:00, 17 December 2023 (UTC)
I'm against any kind of automated deletion process. However, it is an issue that there are potentially countless AI-generated images of little value. I therefore still stand by my proposal from the recent VP discussion to apply stricter criteria for AI images than COM:EDUSE would usually provide, and only accept images that are in use in Wikimedia projects. Those, however, we must accept (unless there is a tangible copyright / derivative work concern for a specific file) per COM:INUSE. Delete the rest - but not with an automated process. Gestumblindi (talk) 12:01, 16 December 2023 (UTC)
- By hand? That seems somewhat irresponsible, given the possibility of easy mass-uploads via Flickr. At least put a timer on them - AI stuff is deleted unless it is in use 2 weeks after the original file's first upload. Alexpl (talk) 12:00, 18 December 2023 (UTC)
- Just prohibit mass-uploads of AI art or take action for users that do so. Such a hypothetical problem has not occurred so far and there are many very large problems with auto-deletions. Prototyperspective (talk) 12:17, 18 December 2023 (UTC)
- I'm not sure the auto-delete idea is feasible. We don't even have an in-house system for deleting files that are obvious copyright violations taken from online. We rely on third-party reverse image searches and human evaluation. Even for people who have tried to do this kind of screening — and who are much larger, more innovative, and more well funded than the WMF — the results are pretty mixed. It took teenagers about zero seconds to find ways around YouTube's detection, and they're part of the third largest tech company in the world, while the system has also resulted in high profile copyright strikes against content creators who have good claims to fair use. For AI generated images, I'm not sure any program at all exists to tell the difference, not even a crappy one. You're essentially relying on a self-reporting system by uploaders, one which many will stop doing the moment they realize it's an auto-delete. That's not counting undeleting false positives, or situations where the uploader themselves doesn't realize it's AI. GMGtalk 12:19, 18 December 2023 (UTC)
- Is this a preventative solution searching for a non-existing problem? Mass uploads in the sense of "AI-generated spam" are not okay, I grant that. But I disagree on a general prohibition of AI-generated images, or even on the insistence that they must be used ("or else"). We already have sufficient criteria which images to delete: Commons:SCOPE. AI images will need to be clearly marked as such, and I agree with Gestumblindi's suggestion to look closer when assessing the quality, and delete more readily than we would delete regular artwork made by humans. I don't yet see spammers mass-uploading low-quality AI images, and if those were to appear, we have regular deletion mechanisms. In fact, we don't yet use our deletion mechanisms enough for spammers mass-uploading low-quality regular images. Finally, I expect contributors who are using "good practice" and tag their AI images (of probably higher quality, because they care?) will get punished more by any general prohibition of AI than contributors who don't announce that they used AI, and thus fly under all radars we're trying to establish here. --Enyavar (talk) 14:57, 18 December 2023 (UTC)
- I don't yet see spammers mass-uploading low-quality AI images - I don't know about spammers, but we have absolutely had issues with users uploading large groups of low-quality AI images; a couple examples are Commons:Deletion requests/Files uploaded by David S. Soriano, Commons:Deletion requests/Files uploaded by Chromjunuor, Commons:Deletion requests/Files uploaded by Thomas Nordwest, etc. Omphalographer (talk) 15:36, 18 December 2023 (UTC)
- Agree with what Enyavar said.
- The examples you named are good examples that current practices are well suited to deal with that, and there aren't many more comparable large-scale uploads. The images by Thomas Nordwest were high-quality and should not have been deleted indiscriminately. The set included an excessive number of rather uneducational images of realistic overgrown Buddha head statues, but it also included lots of high-quality images for subjects/categories for which there were zero or until that time no high-quality images, such as Category:Anachronism in art and fiction. Prototyperspective (talk) 16:07, 18 December 2023 (UTC)
- There's maybe a difference between "searching for a problem" and being proactive. There's not much of an argument that AI isn't going to continue to be widely used and won't likely get better over time. Even now, it's quite likely that someone using an upload script from something like Flickr could inadvertently upload tons of AI images without even knowing. You find something that appears authentic, isn't obviously taken from online, and appears to have an appropriate license. That's the assume-good-faith scenario, which alone could be a bit of a problem. GMGtalk 15:44, 18 December 2023 (UTC)
- I don’t think there’s enough of a problem here yet. Most of these cases of AI spam are single users and were purged in a simple mass-nomination. Plus it’s extremely hard for any one thing to overwhelm a collection of over ONE HUNDRED MILLION FILES. I typed in “AI generated” and got about 5000 hits. That’s over 4000 fewer images than you get for “penis”, another widely loathed category of image people want to take all sorts of weird measures against, when in fact it’s hardly overwhelmed all others thanks to plain old manual vigilance. Dronebogus (talk) 19:54, 18 December 2023 (UTC)
- To be fair, the compulsion to share images of your penis is a powerful force, and the number of images is only so low because your average admin has had to sort through a lot of Richards. GMGtalk 20:02, 18 December 2023 (UTC)
- "deleted unless it is in use 2 weeks after the originals files first upload. Alexpl": "In use" means what? Are you saying that the only valid reason AI images can be here is if it is in use on a WMF project, and all other AI images should be deleted? If so, are you saying that it would not be valid to build up a series of images showing what the same product does over time with a given prompt (which seems to me like it would be perfectly in scope). Also: "2 weeks after first upload." Consider two scenarios: (1) an image is uploaded, immediately and appropriately attached to an article and just shy of the two week mark a vandal blanks the article. The bot checks at the two-week mark and deletes the image. (2) a basically useless image is uploaded. The uploader adds it to (say) the Cebuano-language Wikipedia, which is large but, in my experience, very poorly monitored. If they are really sneaky, they add it on day 13, remove it on day 15. They duck the bot. - Jmabel ! talk 20:23, 18 December 2023 (UTC)
Computer generated images used in the contests[edit]
Hello, it was suggested that I open a new topic here (this question was previously asked at the Help desk). I have read that there is a topic which discusses AI images here, but it does not speak about using them in contests. As an amateur photographer, I like to join contests. I use my own photographs taken with my Canon camera. I would like to make sure that only our own images taken as "humans", and not generated by AI, are participating in the contests. - Is there any one of the admins or moderators who vets the pictures in the contests? - Are we all assured that no AI pictures are becoming part of the list of pics that compete in the contest? - What happens if some of us check the pics and see that there are some AI pictures in the contest? - Can we report them, or are those pics fully allowed in the competition? (I believe not, but I ask just in case.) Wikimedia does not explicitly forbid the usage of AI, but I found an implicit statement, as you see in the "Photo Challenge" page info. It says "own work", or "pictures taken by a common user"; hence here comes my question: can we set an "explicit" rule instead on the Wikimedia Commons contests? Thanks for the info, which I believe is quite useful to know. Oncewerecolours (talk) 20:30, 20 December 2023 (UTC)
- This amounts to a proposal to block AI images from being entered into contests, and therefore from winning. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 20:36, 20 December 2023 (UTC)
- thanks for taking this into consideration Oncewerecolours (talk) 20:39, 20 December 2023 (UTC)
- @Oncewerecolours: The scope of this proposal seems unclear. First, the title says "computer generated images" but the rest of your text refers to "AI images" and "AI pictures". Which do you intend to forbid? Second, which contests should be covered? You mentioned the Photo Challenge. Other obvious candidates would be the various Wiki Loves contests. Then there are valued images (kind of competitive), Commons:quality images, and Commons:featured pictures. Featured pictures are complicated because while non-competitive they do feed into Picture of the day and Picture of the Year. Would this also affect awards from other projects, like English Wikipedia's featured pictures and picture of the day? --bjh21 (talk) 21:49, 20 December 2023 (UTC)
- Hi, I meant AI images, and all the images that are not "photographs" — meaning images taken by a human being instead of generated by any software. This matches the rules stated in the photo challenge info page. An AI image is not a photograph; I don't think those images should compete in the monthly photo challenges and in ones like "Wiki Loves Earth" or "Monuments" etc. Sorry if this wasn't clear! Oncewerecolours (talk) 22:03, 20 December 2023 (UTC)
- My opinion on "Wiki Loves" contests (again per my !vote below, these are merely recommendations to the contest organizers, as I don't think we should have any community-wide regulation on contest rules): Images generated wholly or substantially by AI should not be allowed. Image manipulations, whether done via conventional editing software or AI-enhanced software (e.g. DeNoise AI), are allowed but must not misrepresent the subject. -- King of ♥ ♦ ♣ ♠ 23:03, 20 December 2023 (UTC)
- Yes, that is exactly what I meant. Humans take photographs using their cameras (see the symbol on the photo challenge page, a camera...), hence they are the authors. AI software generates images "artificially", not through human eyes and cameras. Photographs are images that come, first, from a human eye, not from AI software. But of course this does not prevent opening separate contests for AI images, if that makes sense — just not for "photographs" as part of "Wiki Loves Earth, Science, Music, Cars"... or the monthly photo challenges. That was my point. Nothing prevents playing the game in two different fields, an AI contest and a photographs contest. I simply don't love to see AI images in the monthly challenges, that is it, as they are NOT photographs. My 2 cents. Thanks for the follow-up, everyone. Oncewerecolours (talk) 10:49, 21 December 2023 (UTC)
- It does seem a bit unfair for the person who wakes up early to get a picture of a mountain at sunrise to be pitted against somebody who simply typed "mountain at sunrise" a few times until they got a good AI image. It feels like the teenager who uses AI to generate their homework. GMGtalk 14:07, 22 December 2023 (UTC)
Block AI images from being entered into contests, and therefore from winning[edit]
- Support as proposer. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 20:36, 20 December 2023 (UTC)
- Support Seems very reasonable. Gestumblindi (talk) 20:44, 20 December 2023 (UTC)
- Support Yes. Yann (talk) 20:59, 20 December 2023 (UTC)
- NOTE that the proposal here changed after I wrote this. At the time I wrote the following the proposal did not say that AI images were to be barred from "photography contests" but from [presumably all] contests. Yes, of course if a contest is specific to photography, then it's specific to photography! - Jmabel ! talk 06:18, 25 December 2023 (UTC) Oppose Seems to me that this is up to the people who run the contest. I could easily imagine a contest for illustrations of a particular subject-matter area, where AI-generated entries might be entirely appropriate. - Jmabel ! talk 21:10, 20 December 2023 (UTC)
- Hello Jmabel, can we change the name of the topic to "Block AI images from being entered into monthly photo challenges and 'Wiki Loves' contests"? Sorry, I should have been more clear. I think that this is the issue: I didn't ask to ban the AI pics from ALL the contests. Thanks again and sorry for the misunderstanding. :)
- AI can defo be used in "Best AI Images" or "Best Computer Generated Pic of the Month" etc. I don't have anything against it. Oncewerecolours (talk) 14:15, 24 December 2023 (UTC)
- @Oncewerecolours: I wrote the topic as a simplification based on your earlier work on this subject. I would be willing to add "photography" to form "Block AI images from being entered into photography contests, and therefore from winning", would that be ok with you? More than that, I think we would need a different proposal. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 18:41, 24 December 2023 (UTC)
- @Jeff G. Of course you did well, as I wrote that before and you reported it here, but I forgot to specify the type of contests... it seems this caused a misunderstanding. I don't have anything against AI pics. I just asked for a kind of measure to prevent future situations where some AI pics are posted in "photography contests" like the regular ones mentioned above. So your proposal seems fine to me.
- Thank you. Oncewerecolours (talk) 18:49, 24 December 2023 (UTC)
- @Jmabel: Would it make sense to have a separate proposal specific to photography contests? — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 06:30, 25 December 2023 (UTC)
- @Jeff G.: It seems that is what you've already now done here. Which is fine. As I said in my recent comment, of course it is reasonable to have a contest that is specific to photography. It is possible from Alexpl's remark below that he disagrees, but since he apparently doesn't like being pinged, I'm not pinging him. I was responding to what was written here, not to what someone may have thought but didn't write. - Jmabel ! talk 06:35, 25 December 2023 (UTC)
- Support as a default rule for COM:PC, Oppose as a blanket prohibition. That is, putting my Commoner hat on, I don't think we should regulate the running of individual contests as a community, but putting my PCer hat on, AI entries should be assumed to be banned from PC challenges unless otherwise stated. Likewise, truthfully described AI-generated work should not be prevented from becoming FP, and those that do become FP should not be prejudiced in the POTY contest. -- King of ♥ ♦ ♣ ♠ 21:21, 20 December 2023 (UTC)
- Support AI images don't belong on Commons because they are fundamentally incompatible with our principles - mainly attribution and respect for copyright. However, until the rest of the community catches up with me on that point, I'm onboard with any and every effort to limit their presence. The Squirrel Conspiracy (talk) 23:05, 20 December 2023 (UTC)
- Brief note: how would you attribute millions (from thousands to billions) of images for one txt2img image? Are artists required to attribute their inspirations and prior relevant visual media experiences? The name 'copyright' already suggests that is about copying, not about learning from these publicly visible artworks; and art styles like 'Cubism' or subjects like 'Future cities' aren't copyrighted. The premise is unwarranted and wrong. --Prototyperspective (talk) 14:37, 22 December 2023 (UTC)
- If something truly shows the influence of millions of images, then it almost certainly does not have a copyright issue: it's just likely to be repetitive and unoriginal, unless it is somehow an interesting synthesis. But I think that is the least of the problems: most AI-generated content is unacceptable for the same reason most original drawings by non-notable people are unacceptable. - Jmabel ! talk 19:25, 22 December 2023 (UTC)
- Brief note: how would you attribute millions (from thousands to billions) of images for one txt2img image? Are artists required to attribute their inspirations and prior relevant visual media experiences? The name 'copyright' already suggests that is about copying, not about learning from these publicly visible artworks; and art styles like 'Cubism' or subjects like 'Future cities' aren't copyrighted. The premise is unwarranted and wrong. --Prototyperspective (talk) 14:37, 22 December 2023 (UTC)
- Support Having this be the default in advance will save much time and trouble. (If for some reason there would be a specific contest for AI images only, that would be an exception.) -- Infrogmation of New Orleans (talk) 00:00, 21 December 2023 (UTC)
- Generally Support but with a few reservations. Photo contests should generally honor the efforts made by human contributions, not AI contributions. However, I may agree on some AI-specific contests like "Wiki Loves AI" (or something similarly-worded). In the case of existing Wiki Loves contests (WLM, WLE et cetera), I suggest organizers have a separate category for AI images. Yes, it still depends on WL organizers; should they ban entry of AI images, that is fine. JWilz12345 (Talk|Contrib's.) 00:29, 21 December 2023 (UTC) Vote dropped in favor of alternate proposal below. JWilz12345 (Talk|Contrib's.) 09:38, 25 December 2023 (UTC)
- Support --Adamant1 (talk) 02:21, 21 December 2023 (UTC)
- Support for photography-specific contests. There can be contests that are also about illustrations or artistic works specifically, where such tools can be useful. However, a ban isn't really needed since that is already practiced and quite common sense; I don't know how people on a website that is okay with showing unexpected porn and gore in unexpected categories and search results to all users suddenly turned so suppressive when it comes to a specific new tool of image creation+editing. Seems very inconsistent. --Prototyperspective (talk) 11:07, 21 December 2023 (UTC)
- Oppose Either material is allowed on Commons or it is not. If/when necessary, just adapt the specific rules of "those contests". Christian Ferrer (talk) 11:44, 21 December 2023 (UTC)
- Support per OP and Squirrel. — Huntster (t @ c) 22:44, 21 December 2023 (UTC)
- Support, except for AI contests :D --PantheraLeo1359531 😺 (talk) 19:05, 22 December 2023 (UTC)
- Support The last thing we want is allowing an AI-generated image to enter the contest. George Ho (talk) 19:54, 22 December 2023 (UTC); corrected, 19:57, 22 December 2023 (UTC)
- Didn't realize it's about AI-generated images. I still oppose AI entities from entering contests. George Ho (talk) 19:57, 22 December 2023 (UTC)
- Support I'm anticipating that allowing AI generated works in could create a lot of clutter. Bremps... 00:01, 23 December 2023 (UTC)
- Oppose As long as AI is allowed on commons, it should be allowed in every contest. Alexpl (talk) 09:54, 23 December 2023 (UTC)
- @Alexpl: Every contest? So do you also believe that all contests must include drawings, paintings, audio files, etc.? - Jmabel ! talk 10:41, 23 December 2023 (UTC)
- Don´t ping me for stupid questions. Thank you. Alexpl (talk) 12:15, 23 December 2023 (UTC)
- I honestly don't think my question was stupid. If every contest should be open to AI-generated content, why shouldn't it be open to other acceptable forms of content? Seems very odd. - Jmabel ! talk 19:06, 23 December 2023 (UTC)
- Oppose In my opinion, banning every tool with the label "AI" is not helpful. The educational value of works from generative AI is very limited, of course, and there may be serious and difficult issues with copyright and possibly personal rights. AFAIK, AI upscaling does not and cannot work sufficiently and leads to artifacts and partially blurred and partially oversharpened images. However, smartphones might do aggressive AI-post-processing by default. Nevertheless I understand why these techniques are not welcome. But what about "simple" noise reduction? Even Photoshop introduced an AI tool for this task and there are other tools that work nicely if post-processing is not overdone. This is just the same as with any other kind of image processing software, whereas I don't know any affordable software that can do that without either serious loss of detail or with the trendy "AI" label. And this might be a problem, because AI has a very bad reputation on Commons, which is in sharp contrast to the huge hype almost everywhere else. --Robert Flogaus-Faust (talk) 21:45, 23 December 2023 (UTC)
- Please let's consider, first, what my questions were when I opened this topic. Please see the Wikimedia Commons home page: at the right side of the page, in the photo challenge box, what is displayed is an icon of a camera and the words "take a picture... etc." What I simply ask (I am relatively new to Wikimedia Commons, so I am just trying to understand how it works here) is confirmation that AI pics are excluded from the monthly photo challenges and the Wiki Loves... challenges; this is what it seems to me, indeed. "Take a picture" is different from "post an AI picture in the contest". AI pics have nothing to do 1) with those kinds of contests and 2) with p h o t o g r a p h y. Photography is an art made by humans through their human eyes, first (I would add, and the human soul too). And please do not make the mistake of putting digitally manipulated photos at the same level: post-processing with Photoshop has nothing to do with the AI concept. Photography is art. Painting is art. Sculpture is art. They are made by humans, and hence, of course, they are not the same as reality, but they are made by humans. Even in old-style analogue photography we used (as I did in my darkroom in the past) to "mask" and "burn" the printed photos to hide details; that is an accepted technique for improving the picture's light and details. So what is the problem? What I asked here is simply to exclude those pictures from that kind of contest because they are not photographs. My subsequent question is: what happens if an AI picture is voted for and wins the contest? Will it be confirmed as winner, or could someone intervene? I don't think they should join the contests, that is all. Please do stay on the initial topic if you could. That said, I am NOT asking to exclude the AI pics from Wikimedia: I am asking a different thing! Thanks. Oncewerecolours (talk) 08:40, 24 December 2023 (UTC)
- You are allowed "Post processing with photoshop" in those challenges? I had no idea. So have photos ever been excluded from the competition for having too much "work" done on them? If not - AI should be fine as well (The more religious aspects left aside) Alexpl (talk) 10:09, 24 December 2023 (UTC)
- Well, again... it is a different thing. AI pics aren't photographs... no camera involved, no lenses, no human eye. See the definition of a photograph. And see the photo challenge info page guidelines. Oncewerecolours (talk) 10:32, 24 December 2023 (UTC)
- I am sorry. I may be wrong here. And my issue is not with entirely or partially AI-generated pics, which are very problematic. I very rarely participate in photo challenges and I have never used Photoshop. In most cases, I just crop my photos with GIMP and don't do anything else. I know that there are nature photography competitions elsewhere, where the authors must submit their original RAW files for evaluation in addition to their JPEG version to make sure that nothing was inappropriately manipulated. That is alright, but I could never participate there because my cameras are set to create JPEG images only. I am a frequent participant on Commons:Quality images candidates/candidate list, though. There you can find requests to remove dust spots, CAs, decrease noise, adjust lighting, and even (rarely) retouch photographs to remove disturbing elements and improve the composition. I would not ever do the latter on Commons, because my images are supposed to show what I photographed, not some ideal work of art. I am not sure about the relation of quality images to photo contests, but where the kind of edits described above is allowed or even requested, banning AI tools does not make much sense IMO. That said, overprocessed images and upscaled images (which includes images with artifacts by AI upscaling or by other means) are not welcome there and such images get declined. And images created by generative AI engines are banned anyway because the photographer must have an account on Commons. --Robert Flogaus-Faust (talk) 11:07, 24 December 2023 (UTC)
- The human operator chooses the subject, perspective etc. in conventional photography, as well as in AI* produced pictures. *(depending on the AI program used) So voting "oppose" is still ok, I guess. Alexpl (talk) 10:47, 24 December 2023 (UTC)
- So, you are saying that 1) AI images are the same as photos taken by a human, and 2) AI pics should be allowed in the Wiki Loves Monuments, Earth, Science etc. and monthly challenges, in the same contests as the photos taken by users? Just to understand. Oncewerecolours (talk) 11:02, 24 December 2023 (UTC)
- They are not the same: The photo guy has potentially a ton of equipment and has to move around to find motives, while the AI guy doesn´t need a camera and sits on his butt all the time. The rest of the work for both is pressing buttons and moving a mouse. But if you are unable to specify the rules of your competition, esp. what is allowed in post production, you would have to accept those AI works as well. Merry Christmas. Alexpl (talk) 14:54, 24 December 2023 (UTC)
- Please let's consider, first, what my question was when I opened this topic. Please see the Wikimedia Commons home page, at the right side of the page, the photo challenge box: what is displayed is an icon of a camera and the words "take a picture...etc...etc". What I simply ask (I am relatively new to Wikimedia Commons, so I am just trying to understand how it works here) is confirmation that AI pics are excluded from the monthly photo challenges and the Wiki Loves... challenges. This is what it seems to me, indeed. "Take a picture" is different than "post an AI picture in the contest". AI pics have nothing to do 1) with those kinds of contests and 2) with p h o t o g r a p h y. Photography is an art made by humans through their human eyes, first (I would add, and the human soul too). And please do not make the mistake of considering manipulated digital photos the same thing; post-processing with Photoshop has nothing to do with the AI concept. Photography is art. Painting is art. Sculpture is art. They are made by humans, and hence, of course, they are not the same as reality, but they are made by humans. Even in old-style analogue photography we used (as I did in my darkroom in the past) to "mask" and "burn" the printed photos to hide details; that is an accepted technique for improving the picture's light and details. So what is the problem? What I asked here is simply to exclude those pictures from that kind of contest because they are not photographs. My subsequent question is: what happens if an AI picture is voted for and wins the contest? Will it be confirmed as winner, or could someone intervene? I don't think they should join the contests. That is all. Please do stay on the initial topic if you could. Saying that, I AM NOT asking to exclude the AI pics from Wikimedia: I am asking a different thing! Thanks. Oncewerecolours (talk) 08:40, 24 December 2023 (UTC)
- IMHO AI-images cannot enter photography contests, since they are not photographs (according to en.wikipedia "an image created by light falling on a photosensitive surface, usually photographic film or an electronic image sensor, such as a CCD or a CMOS chip"). On the contrary, if AI image contests took place, it would not be practical banning AI-images from them. So I guess I agree with Jmabel's "this is up to the people who run the contest". I do not have a clear opinion on "AI-corrected-photographs" entering photography contests. Strakhov (talk) 12:47, 24 December 2023 (UTC)
- Yes, there should be 2 separate contests. I agree that "this is up to the people who run the contest". But who runs the monthly challenge in the end? Oncewerecolours (talk) 14:05, 24 December 2023 (UTC)
- To be honest, I don't know. I do not remember participating in a Commons contest so far. I took a look and ...monthly themes are apparently proposed here. I guess regulations & stuff could be included there for each contest. Anyway, current heavy opposition to AI in Wikimedia Commons community would surely prevent AI-stuff from winning these contests, I wouldn't be much worried.... And... how can we identify AI-images in Wikimedia Commons? Is counting fingers the only method? For example, is this one created with AI or just too much post-processed? Strakhov (talk) 16:16, 24 December 2023 (UTC)
- Support seems sensible to me Herby talk thyme 13:27, 24 December 2023 (UTC)
- Comment The proposer changed the title of this section after most people had already commented. I have changed it back. Jeff G., please do not change proposals once they are in progress; you're then misrepresenting the positions people had already taken. You are free to create a new proposal under this one if you'd like. The Squirrel Conspiracy (talk) 06:48, 25 December 2023 (UTC)
- @The Squirrel Conspiracy: Sorry, I have done so below. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 06:54, 25 December 2023 (UTC)
- Support. -- Geagea (talk) 08:58, 25 December 2023 (UTC)
- Support. --Túrelio (talk) 21:53, 27 December 2023 (UTC)
- Oppose, contests should set their own rules. It's only natural that computer-generated images shouldn't be used in photography contests, but writing a prompt, selecting a good image, and at times even editing the end result to make it better is an art in itself. AI-generated images are a new frontier in public domain works and we should encourage good and educationally useful images created with these tools, but in whatever contest they would give an unfair advantage they should be excluded on a case-by-case and contest-by-contest basis. A blanket rule to exclude them would cause more problems than it would solve, especially since every contest can write its own rules. --Donald Trung 『徵國單』 (No Fake News 💬) (WikiProject Numismatics 💴) (Articles 📚) 21:24, 29 December 2023 (UTC)
- Support AI images are not photographs and should not run in a competition about photographs. Paradise Chronicle (talk) 22:18, 30 December 2023 (UTC)
Block AI images from being entered into photography contests, and therefore from winning[edit]
- Support as proposer, with apologies to The Squirrel Conspiracy. This is only about photography contests. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 06:51, 25 December 2023 (UTC)
- Support As per above. The Squirrel Conspiracy (talk) 06:59, 25 December 2023 (UTC)
- Support As per above. -- Geagea (talk) 08:59, 25 December 2023 (UTC)
- Support per above. My vote above has been dropped in favor of this new proposal. JWilz12345 (Talk|Contrib's.) 09:38, 25 December 2023 (UTC)
- Oppose Since AI works are not considered photography anyway, no action has to be taken. Alexpl (talk) 13:56, 25 December 2023 (UTC)
- @Alexpl: Since people are likely to upload AI works and submit them to photography contests, we want to prevent that, or at least keep them from winning unfairly. By opposing, you want to let those people do that. Why? — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 14:09, 25 December 2023 (UTC)
- "winning unfairly" - can´t comprehend, since I don´t know the amount of competitions affected or the actual rules for them. Concerning AI: Do you fear people A) upload AI work and categorize it as such and then enter it into a photo contest, or B) they upload AI work but claim it to be conventional photos and enter those into contests? "A" isn´t really a problem, because the image is already labeled as AI work and can be removed from the competition. And "B" - well, you most likely won´t be able to tell* that it is an AI work anyway if done properly. If it´s "B", I change my vote to Support, but since concealed AI work may be very difficult to identify, it doesn´t really matter. *(made harder by all the post-processing apparently allowed in photo competitions) Alexpl (talk) 17:23, 25 December 2023 (UTC)
- Alexpl: I seek to disqualify both A and B. Postprocessed photos are still photos, but with defects removed or ameliorated in some way. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 17:33, 25 December 2023 (UTC)
- There shouldn´t be a necessity to disqualify "A", since the uploader themself labeled the image as AI work and therefore "not a photograph". You just need "B" and write into the rules: "If a photograph is identified as an AI work, it is removed from a running competition, or, if the competition is already over, it loses the title 'best image of a bug on a leaf 2024'" or whatever it is you guys excel at. Alexpl (talk) 18:07, 25 December 2023 (UTC)
- @Alexpl I believe that it can happen that AI images are posted in photo contests, disguised as "brilliant photographs". How to identify them? The first clue is the lack of flaws, the perfection. The final (last but not least, though) test is the lack of EXIF data. That is a cross-test that most of the time proves to be very useful. My opinion; if anyone has a different view, please share :) Oncewerecolours (talk) 08:06, 27 December 2023 (UTC)
- Support as I remarked above, of course a photography contest is open only to photographs. - Jmabel ! talk 20:26, 25 December 2023 (UTC)
- Support I support this more specific proposal in addition to the broader one above. Gestumblindi (talk) 12:14, 27 December 2023 (UTC)
- Support That's also what I thought the discussion above does or may propose. Banning AI images explicitly in such contests & campaigns would be good, since otherwise users could argue they didn't know generative photography wasn't allowed, didn't know about the respective categories, or didn't know that they should have put this in the file description. A good example case may be images in this cat, where it was somehow unclear whether or not they are photographs (it only had a Flickr tag 'midjourney') and which, before I intervened, were located in a photography cat. --Prototyperspective (talk) 16:05, 27 December 2023 (UTC)
- Support. This should go without saying, but just in case there was any remaining doubt - "photography" excludes all forms of computer-generated images, "AI" or otherwise. Yes, I'm aware there are some grey areas when it comes to image retouching; I also think that photographers should have the common sense to know what is and isn't appropriate, and to disclose anything borderline when submitting photos to a contest. Omphalographer (talk) 01:45, 30 December 2023 (UTC)
- Support. Definitely, computer-generated images shouldn't be included in photography contest.--Vulcan❯❯❯Sphere! 07:15, 5 January 2024 (UTC)
- Support --Adamant1 (talk) 11:27, 9 January 2024 (UTC)
- Support no non-human created photographs in photography contests, or on commons for that matter Gnangarra 12:18, 9 January 2024 (UTC)
- I'm sorry, but that just strikes me as wrong. Category:Monkey selfie leaps to mind; so do most photographs from outer space, except the relatively small number taken deliberately by an individual astronaut/cosmonaut. Similarly, there can be appropriate images taken by security cameras. Conversely, AI rarely takes "photographs"; it creates images by other means. I'd have no problem at all with something where an AI-driven robot was operating an actual camera, as long as the images were in scope, did not create privacy issues, etc. - 19:34, 9 January 2024 (UTC)
Allow the organizers of the contest to decide whether or not they wish to allow AI images[edit]
- Support Why Commons needs a specific rule to deal with this is beyond me.--Trade (talk) 13:50, 27 December 2023 (UTC)
- Oppose as ignoring reality. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 15:16, 27 December 2023 (UTC)
- Support In fact, I could imagine an AI-specific contest. - Jmabel ! talk 18:59, 27 December 2023 (UTC)
- Support This is the best idea. --Robert Flogaus-Faust (talk) 21:42, 27 December 2023 (UTC)
- Support Infrogmation of New Orleans (talk) 21:48, 27 December 2023 (UTC)
- Support, obviously, isn't this already how contests work? --Donald Trung 『徵國單』 (No Fake News 💬) (WikiProject Numismatics 💴) (Articles 📚) 21:25, 29 December 2023 (UTC)
- Comment In my opinion there should be a separate competition for AI images. Paradise Chronicle (talk) 22:22, 30 December 2023 (UTC)
- I can think of few things more insulting and repulsive. Commons' volunteers spend a massive amount of effort making sure that the content here respects the copyright of its creators, and these AI tools are built through open, flagrant disregard of copyright. The Squirrel Conspiracy (talk) 02:44, 31 December 2023 (UTC)
- Hopefully at some point we can create a list of models that are trained only on freely licensed images and allow for artwork created by them to a greater degree than we do with AI artwork at this point. I feel like that's really the only way forward here without disregarding copyright in the process, though. --Adamant1 (talk) 07:36, 5 January 2024 (UTC)
- Support. I support AI-specific competitions and this is a good compromise.--Vulcan❯❯❯Sphere! 07:09, 5 January 2024 (UTC)
- Oppose The proposal to ban AI artwork specifically from photography contests is better IMO. There's no reason we can't just exclude AI artwork from photography contests while allowing it in others. This proposal would essentially take away our ability to moderate how AI artwork is used in contests at all, though, which I don't think is in the project's interests. --Adamant1 (talk) 11:32, 9 January 2024 (UTC)
- Oppose event organisers must comply with Commons requirements for all images uploaded to Commons. Gnangarra 12:16, 9 January 2024 (UTC)
Restrict closing contentious deletion discussions to uninvolved admins[edit]
RFCs can only be closed by uninvolved editors, but deletion discussions can be closed by any admin, even if they are heavily involved in the discussion. I propose changing "administrator" to "uninvolved administrator" in the first sentence of Commons:Deletion requests#Closing discussions. I propose adding the following sentence to Commons:Deletion requests#Closing discussions: "In cases of contentious requests, discussions should be closed by an uninvolved administrator." Nosferattus (talk) 01:55, 29 December 2023 (UTC)
- Support as proposer. Closures by involved admins feel like an abuse of power, or at the very least, a conflict of interest. There is no reason a deletion discussion can't wait for an uninvolved admin, which will always feel more fair to everyone involved. Nosferattus (talk) 01:58, 29 December 2023 (UTC)
- Comment Can you point to specific incidents that caused you to propose this, or is this a solution in search of a problem? The Squirrel Conspiracy (talk) 02:01, 29 December 2023 (UTC)
- Wasn't there a big fuzz with Yann and Eugene about this? Trade (talk) 02:16, 29 December 2023 (UTC)
- @The Squirrel Conspiracy: Here's a recent example. I can supply more if needed. Nosferattus (talk) 02:26, 29 December 2023 (UTC)
- @Nosferattus Maybe it's just me, but your example doesn't make sense to me. The discussion was closed by Jim, and that seems to be their only edit in the discussion. I also do not believe that I have ever experienced an involved admin closing a discussion; maybe I did, but then they hid it really well. Paradise Chronicle (talk) 13:08, 31 December 2023 (UTC)
- @Paradise Chronicle: Please look at the 2nd discussion on that page, not the 1st. Nosferattus (talk) 15:56, 31 December 2023 (UTC)
- Thanks, got it. Didn't know a close of a discussion can be shown at the bottom as well as at the top. Paradise Chronicle (talk) 16:13, 31 December 2023 (UTC)
- Comment My first thought is that this seems a bit overly broad, especially given the significant problem we have with deletion request listing backlogs. I've been an admin on Commons for more than 19 years. If I started a deletion request, or commented on it, I *generally* let some other admin take care of closing it. However there have been occasional exceptions - mostly when trying to clean up months old backlogs, with no new discussion for months, and no counterarguments have been offered to what seems a clear case per Commons/copyright guidelines - I might feel it is a "SNOWBALL" that since I'm there I might as well take care of cleaning it up. I try to avoid conflicts of interest, and even appearances of conflicts. Does having commented on something inherently create a conflict of interest? (Examples: 1) a deletion request is made by an anon with vague reason - I comment that 'per (specific Commons rule) this should be deleted'. Months later I notice that this listing was never closed, no one ever objected to deletion. Is going ahead and closing it per the rule I mentioned earlier a conflict of interest? 2)Someone listed an image as out of scope. I commented, whether agreeing or disagreeing. Then someone else points out that the file is a copyright violation, which nominator and I had not noticed. Should I be prohibited from speedy deleting the copyright violation because I earlier commented on deletion for different grounds?) I'm certainly willing to obey whatever the decision is; I just suggest this could be made a bit narrower, perhaps with specific exceptions? Otherwise I fear this could have an unintended side effect of making our already horribly backed up deletion request situation even worse. -- Infrogmation of New Orleans (talk) 03:09, 29 December 2023 (UTC)
- Or we could just make it so the rule only applies to DR's that have lasted for less than a month Trade (talk) 03:23, 29 December 2023 (UTC)
- @Nosferattus: given your example, I take it that you consider an admin involved if they have in any way participated in the DR? And would you apply this even when the DR has proved uncontroversial? Also: I certainly close clearly uncontroversial CfDs even if I started them. Are you saying I shouldn't have closed Commons:Categories for discussion/2023/12/Category:Taken with SMC PENTAX DA 14 mm f/2.8 ED IF? Because, frankly, I had been very close to making the changes in question without even starting a CfD, but I wanted to make sure I wasn't missing something. What about Commons:Categories for discussion/2023/12/Category:Spielplatz Küsnacht See, where the issue was simply to identify the subject of the category so it could be fixed, or Commons:Categories for discussion/2023/12/Category:Photos of Badagry Social Media Awards (BSMA) (open for 20 days, and no comments for most of that time so I left it open, and when someone finally weighed in it was to agree with me)? I could stop doing this if you object, but please say so explicitly. - Jmabel ! talk 05:23, 29 December 2023 (UTC)
- @Infrogmation and Jmabel: I've changed the proposal based on your feedback. Nosferattus (talk) 06:03, 29 December 2023 (UTC)
- Oppose This would be a good rule if we had enough admins, but with the current number of active admins this could increase the backlog dramatically. We could maybe implement the rule that the deleting admin and the admin who declines an undeletion request cannot be the same, and likewise that, for a reopened deletion request of a file that was not deleted, a decline of the new request has to be done by another admin. Both cases of course need exceptions for vandalism or the abuse of requests. GPSLeo (talk) 12:39, 29 December 2023 (UTC)
- Support with reservations: at the same time, it's a problem when an admin doesn't participate in the discussion and doesn't directly address arguments or give rationales for deletion. This is especially problematic for discussions with only a few votes, for example a nomination and one Keep vote (example example) that directly addresses or refutes the deletion nomination rationale, as well as discussions where there is no clear consensus but a ~stalemate (if not a Keep) as far as votes by headcount are concerned (example). I've seen admins close such discussions (see examples) abruptly without prior engagement and so on. So I think it would be best that, for cases of these two types, closing admins are even encouraged to (have) participate(d) in the discussion, but only shortly before closing it / at a late stage. On Wikipedia there is the policy WP:NODEMOCRACY: reasons and policies are more important than vote headcounts, especially for cases that are unclear by headcount, but it seems like here both voting by headcount and admin authority are more important. It wouldn't increase the backlog but only distribute the discussion closing differently. Bots, scripts & AI software could reduce the backlog, albeit I don't know of a chart that shows the size of the WMC backlogs, and it wouldn't significantly increase due to this policy change. Prototyperspective (talk) 13:16, 29 December 2023 (UTC)
- Oppose Proposal is currently overly broad and would be detrimental in shortening our backlog. I don't close DRs that I have a heavy amount of involvement in except for when I withdraw ones that I had started. If I leave an opinion on whether a file should be kept or deleted, I wait for another admin to close. Sometimes though, I like to ask questions or leave comments seeking information that helps me decide on borderline cases. I'd be more supportive if this proposal were more limited. I can also agree with GPSLeo that deleting admin and admin who declines UDRs of the file should not be the same one. Abzeronow (talk) 16:54, 29 December 2023 (UTC)
- @Abzeronow: Do you have any suggestions or guidance for how a more limited proposal could be worded? How would you like it to be limited? Nosferattus (talk) 17:34, 29 December 2023 (UTC)
- Support This should be natural. Since it isn't natural to too many admins, it needs a rule. --Mirer (talk) 17:48, 29 December 2023 (UTC)
- Comment There are times when posters to UDR present new arguments or new evidence. If that is enough to convince the Admin who closed the DR and deleted the file, why shouldn't they be allowed to undelete? — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 18:03, 29 December 2023 (UTC)
- Oppose per Abzeronow. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 18:05, 29 December 2023 (UTC)
- Although I am myself in support of not closing discussions/DRs where I am involved, except, as Abzeronow says, ones one withdrew or so, I believe our current ratio of active admins should be considered. We do not have plenty of admins like English Wikipedia has. As such, I tend to Oppose. ─ The Aafī (talk) 19:18, 29 December 2023 (UTC)
- Oppose Discussions are closed according to Commons policies, not according to votes. Yann (talk) 19:39, 29 December 2023 (UTC)
- @Yann: Although I appreciate your work on deletion and your opinion here, this reply comes across as completely dismissive. No one has said anything about votes. Of course discussions are closed according to Commons policies. Do you believe that admins have a monopoly on the understanding of Commons policies? Do you understand why closing a contentious discussion you are involved in could be problematic and discourage other people from participating in the process? Nosferattus (talk) 16:29, 30 December 2023 (UTC)
- Contrary to picture contests, opinions in DRs are not votes. Participants, including non admins, can explain how a particular case should be resolved compared to Commons policies, but it is not uncommon that a DR is closed not following the majority of participants. Also, seeing the small number of admins really active, it is not possible that admins exclude themselves from closing if they give their opinions. Yann (talk) 09:57, 31 December 2023 (UTC)
- Oppose. Involved editors should not close discussions, but I'm leery of making that an absolute rule. There are times when it can be reasonable. I also do not want to encourage complaints about reasonable closures just because the closer had some involvement. Glrx (talk) 01:39, 30 December 2023 (UTC)
- Oppose - This is presented without evidence of a problem (or even articulation of one) and without articulation of thought or analysis related to potential downsides, indeed as referenced above. Additionally, reliance on--here, increasingly use of--adjectives in governing documents is terrible practise in real life and on-site. All this would do is shift post-closure disagreement from "should [Admin] have closed this" to the even more complicated "was [Admin] 'involved'" and "is the discussion 'contentious'". Alternatively stated, to the extent this proposal seeks to limit biased closures, all it would do is provide more avenues to argue such closures are within the range of discretion for interpretation of those terms. If an admin is making inappropriate closures, raise the issue at a notice board. If a prospective admin has not demonstrated an ability to use discretion and abstain when too close to an issue, oppose their rfa. Ill-considered policy changes are not the correct approach. Эlcobbola talk 17:03, 30 December 2023 (UTC)
- "Involved" means they participated in the discussion. "Contentious" means different opinions were presented. These criteria are easy to objectively determine. I added "contentious" because other editors wanted the criteria narrowed. Nosferattus (talk) 18:16, 30 December 2023 (UTC)
- Oppose I'd be for this if there were more people who could close discussions. There just aren't enough who can at this point to justify limiting the number even more by approving this, though. Although it would be a good idea if or when there are enough users who can close deletion discussions to make up for the deficit. --Adamant1 (talk) 11:31, 31 December 2023 (UTC)
- Support As an admin, I have always followed this as my personal policy. It simply wouldn't feel right to me to close a discussion where I was substantially involved, giving my own opinion. When a deletion request doesn't have a lot of discussion but I have a clear opinion on the matter, I often decide to just give my opinion and, consequently, leave the discussion for the next admin to decide. I agree with Mirer and think "it should be natural". However, I have encountered admins who do this, even closing their own proposals and deciding that a discussion went in favor of their opinion when this isn't perfectly clear. So, making this an official policy would be a good idea IMHO. I would still allow closure of discussions where the admin's involvement was only technical. Gestumblindi (talk) 15:06, 31 December 2023 (UTC)
- Support It's a fair proposal and it would avoid discussions in the future. I actually thought this was already normal, as I have never experienced an involved admin closing a discussion. Paradise Chronicle (talk) 17:59, 31 December 2023 (UTC)
- How do you define involved? I often had the case that I asked a question to the uploader and as I got no response I deleted the file. GPSLeo (talk) 18:51, 31 December 2023 (UTC)
- Of course I'd also say that admins who become involved in a merely technical, formal way, such as correcting mistakes in formatting or spelling, or ensuring that the uploader had enough time to defend their file, should be allowed to close a DR. But in my opinion no admin should close a discussion in which they have voted or presented an argument in support or opposition. Paradise Chronicle (talk) 19:30, 31 December 2023 (UTC)
- Support There's zero reason admins should be closing DRs they have either voted or heavily commented in. No one expects an administrator not to close a DR where they have made a benign, meaningless comment. But there's zero reason they should be able to close one if they have participated beyond that. Especially in cases where the participation shows they are invested in a specific outcome. --Adamant1 (talk) 11:36, 9 January 2024 (UTC)
- Oppose as per Yann and Эlcobbola. DRs are not a popularity contest. 1/ DRs should be closed following our policies, not following a majority of votes. 2/ it is sufficiently hard to find administrators to look at some complicated DRs, and if in addition we prevent those "involved" administrators from closing DRs, it would become harder to find "uninvolved" administrators who are able to digest long discussions containing 2, 3 or more points of view. 3/ if some closure is contentious, there are still various places where potential issues can be raised (Village Pump, Village Pump/copyright, Admin Noticeboard, Undeletion Requests, etc.). 4/ To restrict the freedom of movement of the (not enough) administrators who are trying to do the job well is not a good thing IMO. Christian Ferrer (talk) 11:05, 10 January 2024 (UTC)
Help prepare Commons:AI-generated media for proposal as guidelines[edit]
Needless to say, AI-generated media has become one of the most contentious topics on Commons and a subject of much debate and discussion. Over the past year, numerous editors have attempted to create guidance on how Commons should handle AI-generated media. This page has already been linked to from dozens of deletion discussions and user talk pages (with some people prematurely referring to it as a policy).[1] After an initial flurry of edits and revisions, the page has now been stable for at least 4 months. Please take a look at it, and if you notice any things that need to be changed, open a discussion on the talk page. Thank you! Nosferattus (talk) 02:50, 29 December 2023 (UTC)
- Thibaut120094 explicitly calls it an essay Trade (talk) 03:25, 29 December 2023 (UTC)
- Yes, it is currently an essay: "for the time being marking as an essay, also as a remedy for the missing categorization. If the page reaches the status of a proposed policy, change accordingly...". Considering how contentious AI deletion discussions tend to be and how rapidly the number of AI images on Commons is increasing, I think it is important that it at least be promoted to a guideline, if not a policy. Nosferattus (talk) 05:57, 29 December 2023 (UTC)
- People have decried that "AI was gonna take over Commons" since the essay was made last December. Trade (talk) 15:36, 29 December 2023 (UTC)
- As for the claim of AI deletion discussions being contentious, if you look at the DRs and discussions from oldest to newest, you'll notice that they used to be perfectly civil until a few weeks ago. It's very much a recent thing. --Trade (talk) 22:01, 29 December 2023 (UTC)
- @Jmabel: 's summary of the Village Pump discussion would be a good starting place for a policy. The current essay is all over the place and not based on current consensus, though. Plus a lot of it just seems cursory at best, if not totally unnecessary. --Adamant1 (talk) 11:26, 31 December 2023 (UTC)
- @Adamant1: Please let me know which parts of the page you feel are not based on consensus or are unnecessary (or just boldly edit the page). Thanks! Nosferattus (talk) 16:01, 31 December 2023 (UTC)
- Per Adamant1's remark, I've taken the liberty of copying my VP comments to Commons talk:AI-generated media#Possible alternative/additional text for this page. - Jmabel ! talk 20:03, 31 December 2023 (UTC)
- Question There might be a problem with the templates and categorization requirements. E.g., Category:Photos modified by AI is a subcategory of Category:Retouched pictures. The template {{Retouched picture}} adds Category:Retouched pictures to the image, which would result in some overcategorization IMO. Is this o.k. anyway because Category:Retouched pictures is a hidden category? --Robert Flogaus-Faust (talk) 16:34, 31 December 2023 (UTC)
- It sounds like we may need a template specifically for AI modification (apart from upscaling). Nosferattus (talk) 22:59, 5 January 2024 (UTC)
Allow image-reviewers to delete files[edit]
In the discussion above, many editors complained that there aren't enough admins to deal with the file deletion backlog. To address this problem, I propose that we enable the "delete" right for the image-reviewer user group and allow image-reviewers to close deletion discussions. This would add 323 more people who could help address the deletion backlog. Nosferattus (talk) 18:34, 30 December 2023 (UTC)
- Oppose Active image reviewers with free capacity can apply as admin. --Krd 19:00, 30 December 2023 (UTC)
- Oppose - Image reviewer is a very low standard and, in actual practice, primarily entails mere comparison of an uploaded file's purported license to licensing information at the source. There have, for example, been instances of image reviewers credulously "passing" obviously laundered licenses and/or failing to consider appropriately the multiple copyrights that can exist in derivative works. Deletion is a sensitive enough function that a greater degree of community approval should be present to assess competence in those and other issues (the LR flag is granted by a single admin, which is not adequate evaluation). Giving more users the delete button, especially based on an inadequate criterion like the LR flag, is overly simplistic and fails to understand the root cause of the issue; what is needed is not more deleting users, but more participation. The majority of backlogged DRs relate to complex issues that have had little to no discussion. More participation by all users there--rather than, say, here--would allow existing admins to assess consensus and act. How many of those 323 reviewers have opined at, say, requests in Commons:Deletion requests/2023/09? Almost none? Эlcobbola talk 19:18, 30 December 2023 (UTC)
- Oppose per above. The Squirrel Conspiracy (talk) 02:38, 31 December 2023 (UTC)
- Oppose per elcobbola. Glrx (talk) 02:46, 31 December 2023 (UTC)
- Eventual Oppose, deletion closures are best handled by exceptional users who are prudent in decision-making (the admins). We have a much more severe backlog at COM:Categories for discussion, and I think autopatrolled users should have the right to delete categories if the CfD results in the deletion of a certain category, to enable category moves. (Must I open a proposal on this as a new section here?) JWilz12345 (Talk|Contrib's.) 19:16, 10 January 2024 (UTC)
I withdraw the proposal. Anyone have any other ideas for addressing the problem? Nosferattus (talk) 04:27, 31 December 2023 (UTC)
- Support - As there is a separate user group that handles copyright and reviews uploads, which is only included in the admin toolset, the community trusts them with reviewer access. Therefore, I believe that the "delete" access should also be included in the reviewer group. Thank you.--C1K98V (💬 ✒️ 📂) 05:48, 31 December 2023 (UTC)
- Comment Is there any reason delete access can't be granted on a case-by-case basis like is now being done for people who want to overwrite files? --Adamant1 (talk) 11:14, 31 December 2023 (UTC)
- Delete access is already granted on a case-by-case basis via Commons:Administrators/Requests. It's not the project's goal to make procedures and the set of policies as complicated as possible. Krd 11:29, 31 December 2023 (UTC)
- The proposal is already withdrawn, so I think there is no need to formally oppose it now, but just adding my two cents: Deciding deletion discussions and deleting files is a central part of admin rights and requires the kind of experience on Commons that we usually see as grounds for granting these rights - so, if someone thinks they're experienced enough to decide deletion discussions, they should simply start a request for adminship, as Krd says. Also, I think there is currently no technical way to separate deletion rights from the undeletion right, with which comes the ability to view "deleted" files (which actually aren't deleted technically, but visible only to admins), and this group shouldn't be made too large for legal reasons (it's already questionable to not actually "hard-delete" images which were deleted e.g. for copyright reasons, and only somewhat justifiable by restricting access to a small group, that is, admins). Gestumblindi (talk) 15:00, 31 December 2023 (UTC)
- @Gestumblindi: "only somewhat justifiable": it's entirely justifiable on that basis. Remember, the legal aspects of "fair use" easily let us host content on that basis for a highly restricted audience. Quite likely, as an educational site we could host most files (and certainly all that are used legitimately in any of the Wikipedias) publicly on that basis if that were our policy, because our site is educational. The exclusion of "fair use" files from Commons is largely a policy issue, not a legal issue. - Jmabel ! talk 19:52, 31 December 2023 (UTC)
- Thanks, Jmabel, I tend to look at legal aspects from my European perspective, where we don't have the US fair-use provisions (therefore, for example, German-language Wikipedia doesn't accept "fair use" either), but of course you're right that, if you consider fair use, wider access to "deleted" (flagged as deleted) files shouldn't be that much of an issue copyright-wise (and as Bjh21 points out, it seems that it would be possible to grant deletion without undeletion rights, though this would create new issues, will answer to that below). There are, of course, still images that are deleted for reasons other than copyright, such as personality rights, and in these cases, fair use doesn't help us. Wide access to files deleted because of privacy concerns, for example, could be an issue. Gestumblindi (talk) 09:16, 2 January 2024 (UTC)
- Point of information: mw:Manual:User rights doesn't say that "delete" depends on "undelete" (or any other right), so I think it should be technically possible to grant just "delete" to licence reviewers. And meta:Limits to configuration changes notably lists only "Allow non-admins to view deleted content" as a prohibited change, and not allowing non-admins to delete pages. --bjh21 (talk) 18:49, 31 December 2023 (UTC)
- @Bjh21: Thank you, that's good to know. However, I think that granting only the "delete" right without "undelete" (and thus without the ability to view deleted content) would create new issues, too. People with that delete-only right couldn't review their own deletions (except if it would be possible and allowed to let them only view content they deleted themselves?)... Gestumblindi (talk) 09:19, 2 January 2024 (UTC)
- Indeed, I was only commenting on your "no technical way" claim. I agree that in general it's a bad idea to give someone the ability to do something they can't undo. --bjh21 (talk) 15:06, 2 January 2024 (UTC)
- Why couldn't they just contact an admin and have them undelete the file in the rare cases where they would need to? That would still be less work than the current system. Although it seems like undeleting files would be a non-issue if they were only closing DRs with clear outcomes to begin with. --Adamant1 (talk) 15:23, 2 January 2024 (UTC)
- Support of course Юрий Д.К 19:34, 3 January 2024 (UTC)
- Oppose, unless image-reviewers get vetted in the same way as administrators I don't see why they should be able to delete files. Having more eyes on files can help, the issue with the current system isn't that it's a bad system per se, rather it's understaffed. Perhaps we could split administrators into more user groups in the future (in fact, I very much encourage it), but the two (2) user rights of blocking people / accounts and deleting pages are the only rights that need to be exclusive to administrators. --Donald Trung 『徵國單』 (No Fake News 💬) (WikiProject Numismatics 💴) (Articles 📚) 00:01, 8 January 2024 (UTC)
- @Donald Trung not only understaffed, but also facing very contentious deletion requests involving copyright in the objects that the photos or videos show. In my personal perspective, much of the deletion discussion on freedom of panorama is actually avoidable, if the FoP rules of the more than 100 countries we treat today as having no FoP become fit for the new media/Internet age. There would then be a substantially smaller number of deletion requests to deal with, as the likes of Burj Khalifa, Wisma 46, Bayterek Tower, Malacañan Palace, or N Seoul Tower would have become acceptable for commercial-license hosting here. Perhaps the remaining DRs would concern public monuments and landmarks from countries that seem anti-FoP, like France, Costa Rica, Argentina, and Ukraine. This is just my personal POV regarding the great number of DRs that are actually avoidable. JWilz12345 (Talk|Contrib's.) 00:58, 8 January 2024 (UTC)
- @JWilz12345: I'm sorry, maybe I missed your point, but are you just saying that we'd have fewer DRs if more countries had liberal Freedom of Panorama? Or are you saying something else? In particular, are you saying something that has bearing on this proposal? - Jmabel ! talk 05:19, 8 January 2024 (UTC)
- @Jmabel that is just my insight, and yes, a substantial share of DRs concerns derivative works, and a share of DW DRs concerns FoP-related issues. Before, it was common to nominate Russian buildings and Belgian monuments, but ever since more liberal FoP rules were implemented in both countries, there has been only a small share of DRs concerning Russian buildings and Belgian monuments. There is a slight reduction in the number of DRs as a result (improper DRs targeting such works can be speedily kept), slightly reducing some of the backlog. I have seen some of the most-overused DR subjects here, concerning the Louvre Pyramid and the Hassan II Mosque (but I don't expect France and Morocco will embrace Wikimedia-friendly FoP rules anytime soon). JWilz12345 (Talk|Contrib's.) 10:05, 8 January 2024 (UTC)
- We can only follow the law, not write it. I'm sure that at least 95% (ninety-five percent) of contributors would want more liberal copyright ©️ laws to allow more educational content, but the truth is that pro-FoP lobbying is slow and oftentimes unproductive. As much as I would want all of us to become more politically active and create more lobbying organisations (in fact, not too long ago I proposed the creation of "Commons:Lobby" to organise such actions), admins must enforce these laws and these images may not be hosted publicly until the laws change (then we can undelete entire categories of images). --Donald Trung 『徵國單』 (No Fake News 💬) (WikiProject Numismatics 💴) (Articles 📚) 22:37, 9 January 2024 (UTC)
- It's just my opinion as a lay person, but there's at least a couple of countries outside of the United States where users could embrace fair use if they wanted to. There just doesn't seem to be any will on their part to do it, though. Understandably, because it's much easier to just upload images here and then blame other people if they are deleted than to put the time and effort into managing things themselves on their end. I'm sure there's plenty of countries out there where we (or more importantly Wikipedia) could take a much more lax stance without running into problems, if there were just the will to do it. It's 99% untested and extremely low risk to begin with anyway. --Adamant1 (talk) 23:06, 9 January 2024 (UTC)
- @Donald Trung a good start is a page I started at meta-wiki: meta:Freedom of Panorama. It should begin kicking off the things that pro-liberal-FoP advocates need. Anyone can also contribute to that page. JWilz12345 (Talk|Contrib's.) 23:10, 9 January 2024 (UTC)
- @Adamant1: Certainly there are places where we could get away with a lot, especially for use only within an educational project like Wikipedia. But (at least as far as Commons is concerned) that's not the point. The point is that for images that are copyrighted we try to confine ourselves to images where, as long as reusers comply with the offered license, they (the reusers) won't be in trouble, not just that we won't be in trouble.
- If we really wanted to change this policy: the one thing we could, in principle, change would be to allow some content with NC licenses. There are many countries that have FoP for non-commercial use, even though they limit commercial use. But I also understand why, early on, we decided not to allow NC licenses: we wanted to encourage people to use freer licenses than that. I'd guess that many of our larger contributors of original work would opt for NC if they could stay involved in the project and stick to NC licenses. I probably would: I'm sure I've cost myself thousands of dollars by offering such free licenses on all of my work. Of course, I've also made that work tremendously more available, and given it a far wider reach. - 19:05, 10 January 2024 (UTC) — Preceding unsigned comment added by Jmabel (talk • contribs)
- Comment IIRC, in previous discussions (here? somewhere else?) there was an issue of the WMF being unwilling to separate delete from undelete, and for legal reasons, we can't grant undelete to users who have not passed some RfA like process. GMGtalk 14:07, 12 January 2024 (UTC)
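For context on what the withdrawn proposal above would have entailed technically: MediaWiki assigns per-group rights through the $wgGroupPermissions configuration array. The fragment below is purely a hypothetical sketch (Wikimedia sites are configured through the WMF's central configuration, not a local LocalSettings.php, and the change was never requested):

```php
# Hypothetical sketch only; the proposal above was withdrawn.
# MediaWiki per-group rights are set in site configuration like this:
$wgGroupPermissions['image-reviewer']['delete'] = true;

# As bjh21 notes above, 'delete' does not imply 'undelete' or viewing
# deleted revisions; those rights ('undelete', 'deletedhistory',
# 'deletedtext') would remain admin-only unless granted separately.
```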
Noinclude categories for DRs[edit]
Is there a way to add categories wrapped in noinclude tags to DRs with HotCat? I have been helping out by adding categories to DRs with HotCat, but no such category appears there. Maybe there is a hidden category for it? If not, is there another solution? Paradise Chronicle (talk) 22:32, 30 December 2023 (UTC)
- I believe a solution to this issue was already requested in 2017, but there was no answer. Paradise Chronicle (talk) 13:28, 31 December 2023 (UTC)
- @Paradise Chronicle yes, indeed no responses before archival. JWilz12345 (Talk|Contrib's.) 11:25, 4 January 2024 (UTC)
- Support. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 11:43, 4 January 2024 (UTC)
- Support Quite sensible. --Yann (talk) 11:51, 4 January 2024 (UTC)
- Support I'm not exactly sure what's being supported, but whatever. It sounds like a good idea regardless. --Adamant1 (talk) 12:01, 4 January 2024 (UTC)
- Strong support, so that I do not need to resort to two tedious things: copying a certain <noinclude>XXXXX FOP cases/yyyyy</noinclude> and pasting it into DR pages while having the JavaScript of my mobile browser turned off (to avoid any issues in text formatting, as the wiki text editor seems to treat a few types of copied texts as formatted text and not plaintext); or, when launching deletion requests, being forced to select "edit source" and type the same category wiki-code. JWilz12345 (Talk|Contrib's.) 12:41, 4 January 2024 (UTC)
- Support That is a technical request and thus should go into Phabricator, the technical requests page and/or Commons:Idea Lab. Prototyperspective (talk) 14:15, 4 January 2024 (UTC)
- HotCat is a JavaScript tool created and maintained locally at Commons. It isn't part of MediaWiki, and changes to it don't require intervention by a WMF developer. Omphalographer (talk) 05:27, 6 January 2024 (UTC)
- Comment Just to make sure I understand: (1) any time a category is added to an individual DR with HotCat, we always want it inside of a <noinclude> element, and (2) we can identify a page as a DR because its name begins with "Commons:Deletion requests/" and what follows that is not of the form "dddd", "dddd/dd", or "Archive/dddd/dd" (where each 'd' is a digit) or (to cover translations of Commons:Deletion requests) 'aa' or 'aaa' (where each 'a' is one of the 26 lowercase letters in original ASCII). Are there other exceptions that would need to be made besides those five forms? - Jmabel ! talk 20:29, 4 January 2024 (UTC)
- @Jmabel: I don't see a use case for live cats in pages with what follows of the form "dddd", "dddd/dd", or "Archive/dddd/dd" (where each 'd' is a digit). — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 06:50, 5 January 2024 (UTC)
- I was about to answer but also afraid to show off my ignorance. Now that Jeff G. also doesn't seem to know, I have some courage and admit I am afraid I can't answer part two of your question. I am editing mainly in visual mode, and even after your explanation I have no idea what "dddd/dd" means. But I would be very glad to have categories that already have the .... and are detectable with HotCat, so I do not have to resort to the several editing steps similar to those described by JWilz12345. Paradise Chronicle (talk) 06:59, 5 January 2024 (UTC)
- @Paradise Chronicle: I know what most of them are, I just don't see the use case. For instance, Commons:Deletion_requests/2016 appears to be a badly-named one-off, Commons:Deletion requests/2024/01 contains this month's active DRs, Commons:Deletion requests/2024/01/05 contains today's active DRs, and Commons:Deletion requests/Archive/2024/01/04 contains the DRs started yesterday and already archived because the subject page(s) were speedily kept or speedily deleted. Tracking down why pages like Commons:Deletion requests/2024/01 are categorized is an exercise best left to the reader (historically, this is because people are not as careful with noinclude as JWilz12345 is). — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 07:36, 5 January 2024 (UTC)
- @Jeff G.: do I understand that you are saying that, functionally, these exceptions are unnecessary, because it would be fine if the rule of adding a <noinclude> element also applied to these? That's fine with me. Might it even be OK to apply this to the language-specific pages? I think it would be. The original proposal was specific to DRs, and I was concerned with how you could technically identify a DR. But, yes, it's simplest if you can just say that anything that begins with "Commons:Deletion requests/" follows this rule. - Jmabel ! talk 19:59, 5 January 2024 (UTC)
- @Jmabel: Yes, it seems so. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 01:14, 6 January 2024 (UTC)
- Support, but is it already possible for the software to automatically add the "Noinclude" tags whenever someone adds a category via HotCat? Does this already exist elsewhere? It's extremely annoying to always have to do this manually. --Donald Trung 『徵國單』 (No Fake News 💬) (WikiProject Numismatics 💴) (Articles 📚) 23:55, 7 January 2024 (UTC)
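If it helps, the page-name rule sketched in the thread above can be expressed as a small script. This is only an illustration with hypothetical function names, not actual HotCat code; it follows Jmabel's pattern list, extended with the daily and archive date pages Jeff G. mentions:

```javascript
// Illustrative sketch of the DR-detection rule discussed above.
// Function names are hypothetical; this is not actual HotCat code.
const DR_PREFIX = 'Commons:Deletion requests/';

// True for an individual DR page; false for date indexes like "2024/01",
// daily or archive logs, and two- or three-letter language subpages.
function isIndividualDr(title) {
  if (!title.startsWith(DR_PREFIX)) return false;
  const rest = title.slice(DR_PREFIX.length);
  const dateIndex = /^(Archive\/)?\d{4}(\/\d{2}(\/\d{2})?)?$/;
  const langCode = /^[a-z]{2,3}$/;
  return !dateIndex.test(rest) && !langCode.test(rest);
}

// Wrap a category so it is not transcluded into the daily DR log pages.
function noincludeCategory(category) {
  return '<noinclude>[[Category:' + category + ']]</noinclude>';
}
```

Per the later exchange in the thread, the date-page exceptions may not even be needed, in which case the rule reduces to the prefix check alone.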
Delete or deprecate or modify Drfop template[edit]
Several users, like Ooligan, have expressed concerns or reservations about FoP-related deletion requests that contain wording inspired by {{Drfop}} (for Ooligan's concern, see Commons:Deletion requests/Files in Category:Transfiguration Cathedral, Donetsk). The wording used by nominators is neither comprehensive nor detailed enough to show what the problem is with a certain public monument accused of being unfree for commercial CC/PD licensing here. The template itself appears to be live, and can be used in nominations even though it only takes the country name as its sole parameter.
I am now proposing to either take down this template or at least deprecate/modify it, so that nominators are forced to explain FoP-related problems in greater detail. While it may be in use in thousands of DRs from the 2010s, I suggest simply copy-pasting the content of the template into the DRs transcluding it, to avoid abrupt content loss in those DRs, and taking down the template afterwards, if the consensus leans towards nuking this template entirely. Ping two users who debated at the template's talk page: @Bluerasberry and Jameslwoodward: . JWilz12345 (Talk|Contrib's.) 23:49, 1 January 2024 (UTC)
- Given that this template has been so heavily used, any substantive changes to it should certainly first involve subst'ing existing uses of the template (or converting them to use an archived version). It is important to preserve what was actually said in a deletion debate, not to have it changed by a later template change.
- Yes, this template could be greatly improved, and I'd have no problem with making this version archival and coming up with something that gave a better explanation of the issues at hand. That said, though: the norm is, indeed, that derivative works violate copyrights. FoP is a widespread exception, not a norm. Even in countries that have "strong FoP" it can have weird limitations (e.g. the Germans not allowing aerial photos under their FoP). - Jmabel ! talk 00:42, 2 January 2024 (UTC)
- Comment I previously stated that I felt the template gives unclear information. I continue to feel that it should lead with a statement of concern. I am ready to comment on any modification proposal. There are 1000+ transclusions, so someone is using this. Bluerasberry (talk) 00:58, 2 January 2024 (UTC)
- @Jmabel and Bluerasberry: I modified now my message accordingly. JWilz12345 (Talk|Contrib's.) 02:05, 2 January 2024 (UTC)
Note, by the way, that I wrote the template in the first place.
I don't think the subject DR is a good case to prompt discussion of changing the template. The DR discusses a building that was destroyed and then rebuilt. It concludes, incorrectly, that the replacement does not have a copyright except for the parts that are different from the original. Of course we know that if a modern artist copies a Rembrandt, the resulting work has a new copyright. This is emphasized by the fact that before Bridgeman, even photographs of a Rembrandt had a copyright.
Note also that the DR is not an FoP issue -- there is no FoP in Ukraine -- so perhaps we need a template summarizing the status of a DW to use at the beginning of DRs of DWs.
Of the 125 countries that we see most often on Commons, 65 have no FoP (see User:Jameslwoodward/Sandbox2). I don't understand what Bluerasberry's "statement of concern" might be -- perhaps they would be good enough to write it out here? . Jim . . . (Jameslwoodward) (talk to me) 14:27, 2 January 2024 (UTC)
- I followed up on the template talk page. Bluerasberry (talk) 19:41, 2 January 2024 (UTC)
[edit]
Large categories, such as Category:Scans from the Internet Archive, pose an issue when users click the category link from a file page like File:The Gull (IA v17n1gullv17ngold).pdf. Currently, it always starts from the first file in the category. However, users are more likely to want to see files around the current file. Therefore, can we modify the link to direct users to Category:Scans_from_the_Internet_Archive&filefrom=v17n1gullv17ngold? This adjustment would provide more relevant file links for the user.
To implement this, I propose the introduction of a MediaWiki magic word like __STARTFROMCURRENTPAGE__. When added to category pages, this magic word would ensure that when users click the category link from a file or other types of pages, it will start from the page's sort key.
It's important to note that Wikimedia Commons differs from Wikipedia, as pages are not interlinked. Consequently, many pages are not indexed by Google due to a lack of links from other pages. Implementing this change and allowing /w/index.php?title=Category in robots.txt would create more interlinks, potentially leading to increased file indexing.
維基小霸王 (talk) 02:50, 2 January 2024 (UTC)
- Since this feature would require changes to MediaWiki, you should probably ask at m:Phabricator, not here.
- For what it's worth, this change would likely make search indexing worse, not better - each file would link to a slightly different page within the category, creating a larger number of redundant pages to be indexed. Omphalographer (talk) 19:31, 2 January 2024 (UTC)
- It is pretty easy to add sane navigation to the category page, if the images are named (or sorted by sortkey) after a pattern that imposes order. - Jmabel ! talk 23:57, 2 January 2024 (UTC)
- This is possible for middle-sized categories, but not for very big categories like what I have mentioned. [2] 維基小霸王 (talk) 01:18, 3 January 2024 (UTC)
- I can see why that would be tough at that scale. So basically, what you'd want is to be able to set things up so that if the file's sortkey (by default the filename) is FOO and it is in Category:BAR, you'd like an easy way to get to
https://commons.wikimedia.org/w/index.php?title=Category:BAR&filefrom=FOO
. I'm not 100% sure that is desirable as default behavior, but I can see why it would be nice to have a choice of that mode. I think it should be possible to achieve that client-side with a user script. - Jmabel ! talk 02:31, 3 January 2024 (UTC)
- Part of the problem here seems to be that these files have DEFAULTSORT set to unhelpful values (the Internet Archive file ID). Removing those might improve matters. Omphalographer (talk) 02:32, 3 January 2024 (UTC)
- Yes, if you are not sorting the category in the order you want it, you'll have quite a problem getting what you want. On the other hand, I think that particular DEFAULTSORT is going to keep the pages of a book together pretty much as you'd like them to be.
- In the example I gave above, the HTML for the category link would currently be
<a href="/wiki/Category:BAR" title="FOO">BAR</a>
, which is pretty tractable to massage in a script if what you want is to produce <a href="/w/index.php?title=Category:BAR&filefrom=FOO" title="FOO">BAR</a>
. Jmabel ! talk 03:20, 3 January 2024 (UTC)
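The client-side approach Jmabel describes could be sketched roughly as below. This is an untested illustration, not an existing gadget: the `#mw-normal-catlinks` selector and the use of `wgTitle` as the sort key are assumptions, and `wgTitle` is only the default sort key, so a real gadget would need extra handling when DEFAULTSORT or a per-category sortkey is set (the very problem Omphalographer points out).

```javascript
// Sketch of a user script that rewrites category links on a File: page so
// they open the category listing at the current file's position
// (&filefrom=...). The URL-building logic is kept in a pure helper.

// Build the "start the category listing from this file" URL.
// categoryTitle: e.g. "Category:BAR"; sortKey: e.g. "FOO" (by default the
// file name without the "File:" prefix).
function categoryFromUrl(categoryTitle, sortKey) {
  return '/w/index.php?title=' + encodeURIComponent(categoryTitle) +
         '&filefrom=' + encodeURIComponent(sortKey);
}

// Browser-only part: assumes the standard category bar container
// "#mw-normal-catlinks" and the mw.config values present on Commons pages.
function rewriteCategoryLinks() {
  if (mw.config.get('wgNamespaceNumber') !== 6) return; // only File: pages
  // wgTitle is the page title without namespace; it is only the *default*
  // sort key, so this sketch ignores DEFAULTSORT and per-category keys.
  var sortKey = mw.config.get('wgTitle');
  document.querySelectorAll('#mw-normal-catlinks li a').forEach(function (a) {
    var m = a.getAttribute('href').match(/^\/wiki\/(Category:.+)$/);
    if (m) {
      a.setAttribute('href',
        categoryFromUrl(decodeURIComponent(m[1]), sortKey));
    }
  });
}
```

For the example file above, `categoryFromUrl('Category:Scans from the Internet Archive', 'v17n1gullv17ngold')` yields a link into the category at that file's position rather than at the first file.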
- @Omphalographer: I guess Phabricator would require a local consensus first?
- Presently, only the first page of every category is allowed to be indexed. Maybe more pages should be indexable on Wikimedia Commons to create more links to files. Any better ideas? 維基小霸王 (talk) 01:19, 3 January 2024 (UTC)
- I like this idea. -- Tuválkin ✉ ✇ 05:08, 6 January 2024 (UTC)
- So I think no one will object if I propose a magic word on Phabricator to make optional start from the page's sort key? --維基小霸王 (talk) 02:32, 7 January 2024 (UTC)
- I think you'd do better to indicate simply that you want a way to go into a category and start from the page's sort key, rather than dictate to the developers how you want it done. As I said above, I think it would be pretty simple to do this client-side with a user script, so it may just be a "gadget". - Jmabel ! talk 03:34, 7 January 2024 (UTC)
"Slideshows" category on main page[edit]
In the "Content" box, By Type section, I think that we should have a category for slideshows. I think this would reinforce the culture of CC presentations and motivate users to properly categorise presentations. Right now, Category:Presentations is very disorganised and includes photos and videos from presentations. I think that being able to access the slideshow files would be beneficial. Egezort (talk) 21:26, 4 January 2024 (UTC)
- I agree! Sometimes it is difficult to find even slideshows about Wikimedia events! Theklan (talk) 21:39, 4 January 2024 (UTC)
[edit]
- Visit https://commons.m.wikimedia.org/ with your cellphone, and login.
- On the Commons home page, you will notice a big blue button in the middle of the screen: "Upload".
- Now tap the person icon in the upper right.
- You will see in the menu "contributions" but no "Upload".
In step 2 we learned that Upload is a very important function. But for no good reason one cannot check one's uploads from the mobile menu. One needs the desktop menu, or to enter the Uploads URL directly. Jidanni (talk) 04:00, 6 January 2024 (UTC)
- I may be confused, but isn't #2 about uploading a file and #4 about seeing the Special:MyUploads page? Are you saying it should be possible to start an upload from any page, or that it should be possible to easily see Special:MyUploads? - Jmabel ! talk 07:45, 6 January 2024 (UTC)
Ban the output of generative AIs[edit]
Now we know that Artificial Intelligences are being trained on modern nonfree works. Please read this: Generative AI Has a Visual Plagiarism Problem > Experiments with Midjourney and DALL-E 3 show a copyright minefield, by Gary Marcus and Reid Southen — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 10:11, 9 January 2024 (UTC)
- Support At least if the output is generated by Midjourney, if not also Dall-E. Although the latter seems to be less susceptible to it, at the end of the day both were trained on nonfree works, so there's a risk of creating derivatives with either one. We could still allow images generated by models that were trained on freely licensed images, if or when there are any. But allowing uploads from a model that clearly disregards copyright, apparently even when someone uses a benign prompt, is just asking for trouble. Not to mention it's also antithetical to the project's goals. I don't think a full ban on anything generated by AI whatsoever, regardless of the model or type of output, would really be workable though. At the end of the day, things like image up-scaling and colorization are probably not harmful enough to justify banning them. --Adamant1 (talk) 10:39, 9 January 2024 (UTC)
- Strong Support as proposer, obviously. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 10:53, 9 January 2024 (UTC)
- Strong oppose for a general ban on everything that an AI is involved in, as the title of this section might suggest. I doubt that "AI" denoising or sharpening can cause a copyright problem. AI colorization or AI upscaling yields mostly very poor results, but I cannot see the copyright problems there either. I don't mind if images created by generative AI that are just based on a text prompt are excluded, possibly with very few exceptions that are needed to illustrate pages about AI. However, has an actual copyright problem been identified with current AI-based uploads to Commons that is so serious or general that it requires a blanket ban on generative AI? I know, much of this might be out of scope anyway. --Robert Flogaus-Faust (talk) 11:32, 9 January 2024 (UTC)
- @Robert Flogaus-Faust: There's been several DRs lately involving clear derivatives, including Commons:Deletion requests/Files found with insource:" happy-looking Gandalf". One of the problems here is that people who are pro AI artwork will turn every DR having to do with it into an argument over whether AI models can generate COPYVIO at all because of how many images they are trained on. It's also sort of impossible to know what is or isn't COPYVIO with AI art generators because we don't have access to the original training sets. So take something like a seemingly benign painting of a 15th century knight. We have zero way of knowing if it's an exact copy of prior artwork, a derivative of one made in the 15th century, or based on a modern painting that's still copyrighted, since there's no source or any other way to confirm anything. The fact that there's clear instances of AI art generators creating derivatives even when people don't ask for them just puts the whole thing in doubt though. --Adamant1 (talk) 11:50, 9 January 2024 (UTC)
- What you call clear derivatives are images that do not look at all like Gandalf, but that word was used in the prompt alongside other changes to get the AI to not create evil-looking Asian people with Samurai-style hats but to create old men with wizard hats. That word is often used in high-quality fan art centered on the concept of the kind of wizard I wanted, so I used it as a technique to make it produce images that more closely resemble contemporary ideas of what wizards are. And no, that they can't generate COPYVIO to begin with is not what I or anybody else I saw ever argued, which should be even clearer in the explanation below. They can, and such images should be deleted and have been deleted. Prototyperspective (talk) 12:50, 9 January 2024 (UTC)
- Strong oppose That article is about what one could call 'hacking' generative AIs to reproduce parts of works they trained on. Such malicious images are difficult to create, rare, and should simply be deleted.
- Moreover, training on nonfree works is allowed as much as you are allowed to view copyrighted images on artstation (or e.g. public exhibitions) and "learn" from them, such as getting inspiration and ideas or understanding specific art-styles. This is similar to human visual experience where anything you create is based on your prior experience which includes lots of copyrighted works. Various authoritative entities have clarified that AI works are not copyrighted. Like Photoshop or Lightroom, it's a new tool people can use in many ways and with very different results. It's a great boon to the public domain and not "antithetical to the projects goals" but matching it where it's finally starting to become possible to create good-quality images of nearly everything you can imagine without very high technical artistic skills. Stable Diffusion is open source and has been trained on billions of images to understand concepts in prompts to it. Prototyperspective (talk) 11:41, 9 January 2024 (UTC)
- "training on nonfree works is allowed" Companies can train models on nonfree works all they want. That doesn't mean we should allow for images that are highly likely to be based on copyrighted works though. I'm not going to repeat myself, but see my reply to Robert Flogaus-Faust for why exactly I think it's such an issue. The gist of it though is that AI works are copyrighted when they are based on (or exact copies of) copyrighted works, and we just have zero way of knowing when that's the case because we don't have access to what images the models were trained on. So it's just as likely that a painting of a historical figure would be based on newer copyrighted works than older freely licensed ones. If anything, there's more chance, since there are fewer images of historical figures the further back you go. There's just no way of us knowing or checking regardless though. At least with normal artwork we know who created it, what it was inspired by, and where it came from. None of that is true with AI artwork. An image has no business being on Commons if there's no source or at least a description of what it's based on. Period. --Adamant1 (talk) 11:58, 9 January 2024 (UTC)
- They are not based on individual images, with few exceptions that the link in the original post is about and that I addressed in my explanations. You also learn concepts such as 'what is a rhinoceros' from your visual experience. Do you think that if you never saw a real rhinoceros and all you ever saw were copyrighted films of one, an image you created based on the knowledge gained through these films would be a copyright violation? I don't need to clarify that they aren't, since multiple entities have done so. As said, cases where it (usually deliberately and maliciously) replicates some image should be deleted and are rare. Prototyperspective (talk) 12:22, 9 January 2024 (UTC)
- No offense, but your comparison of AI generators to humans and how they learn or create things is just a ridiculously bad-faith, dishonest way to frame the technology. It's also not a valid counter to anything I've said. We still require a source when someone uploads artwork created by a human, and neither a prompt nor the AI generator the image was created by qualifies as one. Period. --Adamant1 (talk) 12:40, 9 January 2024 (UTC)
- No, we don't list the visual experiences and inspirations and so on for artworks made entirely manually by humans. You seem to have bad faith against my explanations, where "ridiculously bad-faith" doesn't even make sense. Just calling it "not a valid counter" isn't a good point. Prototyperspective (talk) 12:46, 9 January 2024 (UTC)
- @Prototyperspective: What have Midjourney and Dall-E been trained on, hmmm? — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 12:03, 9 January 2024 (UTC)
- Also billions of images. Since you didn't address what I wrote about it I'll just quote it to avoid walls of text creating circular repetitions: training on nonfree works is allowed as much as you are allowed to view copyrighted images on artstation (or e.g. public exhibitions [public television etc etc]) and "learn" from them, such as getting inspiration and ideas or understanding specific art-styles. This is similar to human visual experience where anything you create is based on your prior experience which includes lots of copyrighted works. Various authoritative entities have clarified that… Prototyperspective (talk) 12:15, 9 January 2024 (UTC)
- The difference is that a normal user will be banned if they repetitiously create and upload derivative works. Yet, apparently, if an AI generator has a history of creating COPYVIO that's perfectly fine "because technology." It's really just glorified meat puppeting though and your only response seems to be acting like it's not an issue when there's a plethora of evidence to the contrary. --Adamant1 (talk) 12:22, 9 January 2024 (UTC)
- These are not derivative works, and text2image generators which, similar to humans, learned concepts through visual learning do not produce copyright violations by default. You want to ban a novel art tool "because technology", and I explained why it's unreasonable and why nothing backs your unfounded conclusions, while subject-level authoritative entities have clarified these are not copyvios. It's glorified avoidance of new technical capacities for no good reason. Prototyperspective (talk) 12:26, 9 January 2024 (UTC)
- Which images aren't derivatives? The ones in the article that Jeff linked to clearly are, and no one even asked for them in that case. So you can stick your fingers in your ears about it, but AI generators clearly produce copyrighted works. And no, I don't want to "ban a novel art tool because technology." I've said multiple times that we should allow for AI generators that are trained on freely licensed images. So I'd appreciate it if you didn't misconstrue my position. You're the only one taking an extreme, all or nothing position on this. --Adamant1 (talk) 12:30, 9 January 2024 (UTC)
- "we should allow for AI generators that are trained on freely licensed images" Such generators, in the sense of being useful, are impossible, and it will remain like that for a few decades if not much longer. Which images aren't derivatives? Images made via Stable Diffusion, Midjourney & Co, except for images like in the links, which I addressed, not ignored, with "such malicious images are difficult to create, rare, and should simply be deleted". Prototyperspective (talk) 12:44, 9 January 2024 (UTC)
- I beg to differ. There's also iStock's AI generator. And you're the one saying I don't understand or have experience with the technology. Regardless, both create perfectly good quality images that I assume would be safe to upload, and I'm sure there's others. So it would be perfectly reasonable to only allow artwork from models that were trained on freely licensed images, given where the technology is at right now. --Adamant1 (talk) 12:52, 9 January 2024 (UTC)
- Those are not freely licensed.
- Not sure why you advocate for these commercial proprietary AI models. Stock images are usually not accurate and/or creative depictions of things either and details about NVIDIA Picasso remain unknown. Prototyperspective (talk) 13:00, 9 January 2024 (UTC)
- I don't care if the underlying technology is freely licensed. That's not the issue. Whether people can use the images without having to worry about violating someone else's copyright is, and per Getty Images' website, images created with their software are "commercially‑safe—no intellectual property or name and likeness concerns, no training data concerns." Which is what's important here. Not whether the underlying software is open source or whatever. --Adamant1 (talk) 13:06, 9 January 2024 (UTC)
- The images trained on are not freely licensed. I do see how you don't care about open source but that isn't what I meant. --Prototyperspective (talk) 13:09, 9 January 2024 (UTC)
- Oppose largely on the basis of terminology. "AI" is a marketing buzzword and not well-enough defined to make policy around. As Robert Flogaus-Faust mentions, there are plenty of things that are called "AI" that are fine for Commons, at least from a copyright perspective. --bjh21 (talk) 12:01, 9 January 2024 (UTC)
- @Bjh21 I mean generative AIs. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 12:07, 9 January 2024 (UTC)
- @Jeff G.: I think even that is probably too broad. For instance it would cover GPT-4 used for machine translation. --bjh21 (talk) 12:40, 9 January 2024 (UTC)
- @Bjh21: Translation starts with a source work of the same type as the output. By contrast, generative AIs (which today typically create medium-resolution images) don't start with a source image; or they start with many source images, some of which are non-free. They also are not notable artists. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 16:42, 10 January 2024 (UTC)
- @Jeff G.: I don't really understand this field, but en:Generative artificial intelligence defines generative AI as "artificial intelligence capable of generating text, images, or other media, using generative models," and mentions GPT-4 as an example (it even has the word in its name). en:Machine translation notes that "one can also directly prompt generative large language models like GPT to translate a text." This leads me to some concern that banning all output of generative AIs might exclude large classes of use that aren't problematic. But maybe machine translation by generative AI is problematic; I don't know. --bjh21 (talk) 17:25, 11 January 2024 (UTC)
- Comment AI generated files need to be uploaded as PD, as there is no sweat of the brow involved and all such services are trained on materials found on the internet. Either that, or all AI generated files are not allowed because the underlying source material isn't declared; we can only accept freely sourced materials, where those sources are provided. As for some minor editing tools to adjust colours, sharpen, or remove noise, those types of adjustments have always been acceptable. Gnangarra 12:12, 9 January 2024 (UTC)
- The underlying source material are billions of images for txt2img; you want to have a sorted list of thousand–billions of images listed beneath each file? e.g. Stable Diffusion’s initial training was on low-resolution 256×256 images from LAION-2B-EN, a set of 2.3 billion English-captioned images from LAION-5B‘s full collection of 5.85 billion image-text pairs, as well as LAION-High-Resolution, another subset of LAION-5B with 170 million images greater than 1024×1024 resolution (downsampled to 512×512). Prototyperspective (talk) 12:29, 9 January 2024 (UTC)
- They are only numbers; they generate only based on a smaller subset, as a picture of a cow has little influence on a picture of a flower. Clearly our images must be honest products of photographers, otherwise they serve no encyclopaedic/educational purpose about the subject. Diagrams have always covered the gap photographs can't convey. Gnangarra 12:48, 9 January 2024 (UTC)
- Just because you can't think of other potential use-cases doesn't mean there aren't any. For example, illustrating art styles. There are thousands and thousands of photos for whatever photographable thing you can think of, yet other subjects of human culture don't seem to be worthy of benefiting from novel technology at all. I put thousand–billions there instead of billions because the images have different degrees of relevance to the generated image. If you generated merely an image of a cow, which wouldn't be useful, then obviously the countless labelled photographs of cows would be most relevant to the image. Prototyperspective (talk) 12:54, 9 January 2024 (UTC)
- This isn't about potential uses I can think of; this is about the movement's honesty and reliability. The end user must be able to trust that what is available on every project is from a reliable source. There are many endangered species, past wars, and deceased persons that we don't have photographs of. When there is no photograph, we should not dishonestly present such photographs as existing. Gnangarra 13:03, 9 January 2024 (UTC)
- Agree. That's not a case for banning AI images. Btw here is an AI image depicting the extinct dodo. Prototyperspective (talk) 13:06, 9 January 2024 (UTC)
- That image is false anyway, as it doesn't show the bird's colourings, nor depict it in its natural environment with plant species from its habitat. My point is that when we have reliable illustrations already, including colour details, we don't need these images anyway; if we do, then these images mislead the viewer and make a mockery of everything we strive to do in being a reliable, trustworthy source. Gnangarra 13:15, 9 January 2024 (UTC)
- Inaccuracies should be pointed out and also occur for manually made images. Moreover, the images can be improved via new versions and the AI software can also improve over time. There are many files in Category:Inaccurate paleoart. Lastly, for many cases we don't have such images available, images being on WMC doesn't mean they need to be used, WMC is a free useful media repository while Wikipedia is the encyclopedia, and all of what you said isn't a case for banning but for properly describing and/or deleting various files. Prototyperspective (talk) 13:33, 9 January 2024 (UTC)
- If someone wants AI generated media, then they will go to the AI service of their choosing and create it as and when they need it; logically, that allows them to grab the most up-to-date reckoning. Gnangarra 13:49, 9 January 2024 (UTC)
- Doesn't make sense. I don't think you have much experience with these tools beyond generating very simple images overly broadly. You wouldn't also say "ah people just make a new diagram about xyz when they need it so we don't need to host it and the same goes for artworks of e.g. cubism". There clearly is an anti-AI-tools bias with lots of unfounded dismissals. Prototyperspective (talk) 14:17, 9 January 2024 (UTC)
- I have yet to see or hear a legitimate use case for most, if not all, AI images despite all your capitulating about it, other than the Wikibook specifically having to do with AI. That's not to say there isn't one, but arguments like "AI artwork is educational because AI artwork is educational" are just tautological. All you're doing is talking in circles while claiming other people who disagree with you are biased once in a while. Same goes for the repeated insistence on making this about other mediums of artwork. Apparently you're incapable of talking about AI artwork without deflecting or trying to change the subject for some reason, even though it's supposedly in-scope and there's no reason to ban it. I don't think people here who think it should be moderated aren't open to alternatives, but you're clearly not making a case for them, let alone proposing any. All you've done is get in the way of there being any changes to how we handle AI artwork whatsoever. Otherwise propose something instead of just getting in the way of everyone else who's trying to deal with the issue. --Adamant1 (talk) 14:45, 9 January 2024 (UTC)
- I explained specific use-cases, and the wikibook is about explaining use-cases (see "applications" in the title). Probably my last reply to you here, but I'm not trying to change the subject as you accuse me of doing. As should be clear to people reading the discussion, I'm always addressing specific points in a prior comment. Interesting that you dismiss all my points in comments like this, where you allege I'm doing nothing but calling people biased or using circular reasoning. Prototyperspective (talk) 14:53, 9 January 2024 (UTC)
- You really haven't. I'm pretty sure I've said it already, but they all boil down to vague handwaving about use cases that either don't exist to begin with or that no one is or will be using the images for. Like your claim that an image was in scope because you could use it on your personal blog, which you don't even have to begin with and aren't using the image for regardless. Same goes for the Jeff Koons knock off image. You claimed it could be used in a Wikipedia article, but no one is using it for that and it would probably be removed if anyone added it to an article anyway. The "uses" have to at least be realistic and ones that people will actually use the images for. You can't just invent a random, unrealistic reason to keep an image and then act like everyone else is just being biased or whatever when they tell you it's not legitimate. --Adamant1 (talk) 15:01, 9 January 2024 (UTC)
- Doesn't make sense. I don't think you have much experience with these tools beyond generating very simple images. You wouldn't also say "ah, people just make a new diagram about xyz when they need it, so we don't need to host it", and the same goes for artworks of e.g. cubism. There clearly is an anti-AI-tools bias with lots of unfounded dismissals. Prototyperspective (talk) 14:17, 9 January 2024 (UTC)
- If someone wants AI-generated media then they will go to the AI service of their choosing and create it as and when they need it; logically that allows them to grab the most up-to-date reckoning. Gnangarra 13:49, 9 January 2024 (UTC)
- Inaccuracies should be pointed out and also occur for manually made images. Moreover, the images can be improved via new versions and the AI software can also improve over time. There are many files in Category:Inaccurate paleoart. Lastly, for many cases we don't have such images available, images being on WMC doesn't mean they need to be used, WMC is a free useful media repository while Wikipedia is the encyclopedia, and all of what you said isn't a case for banning but for properly describing and/or deleting various files. Prototyperspective (talk) 13:33, 9 January 2024 (UTC)
- that image is false anyway, as it doesn't show the bird's colourings, nor depict it in its natural environment with plant species from its habitat. My point is that when we already have reliable illustrations including colour details we don't need these images anyway; if we do, then these images mislead the viewer and make a mockery of everything we strive to do in being a reliable, trustworthy source. Gnangarra 13:15, 9 January 2024 (UTC)
- Agree. That's not a case for banning AI images. Btw here is an AI image depicting the extinct dodo. Prototyperspective (talk) 13:06, 9 January 2024 (UTC)
- This isn't about potential uses I can think of; this is about the movement's honesty and reliability. The end user must be able to trust that what is available on every project is from a reliable source. There are many endangered species, past wars, and deceased persons we don't have photographs of. When there is no photograph we should not dishonestly present such photographs as existing. Gnangarra 13:03, 9 January 2024 (UTC)
- Just because you can't think of other potential use-cases doesn't mean there aren't any. For example, illustrating art styles. There are thousands and thousands of photos for whatever photographable thing you can think of, yet other subjects of human culture don't seem to be considered worthy of benefiting from novel technology at all. I put thousand–billions there instead of billions because the images have different degrees of relevance to the generated image. If you generated merely an image of a cow, which wouldn't be useful, then obviously the countless labelled photographs of cows would be most relevant to that image. Prototyperspective (talk) 12:54, 9 January 2024 (UTC)
- they are only numbers; they generate based only on a smaller subset, as if a picture of a cow has influence on a picture of a flower. Clearly our images must be honest products of photographers, otherwise they serve no encyclopaedic/educational purpose about the subject. Diagrams have always covered the gap photographs can't convey. Gnangarra 12:48, 9 January 2024 (UTC)
- @Gnangarra only A.I. art in countries that follow U.S. jurisprudence may be allowed to be hosted here. But not UK A.I. art: see this. JWilz12345 (Talk|Contrib's.) 12:32, 9 January 2024 (UTC)
- We decide Commons policy; the options are none, only if all sources are acknowledged, and only PD licenses. None of these options override any US laws. The same way we apply the precautionary principle, a person who generates and publishes on Commons, which is in the US, is subject solely to US laws. Gnangarra 12:45, 9 January 2024 (UTC)
- @Gnangarra that may be true, until a British A.I. artist files a letter of complaint to Wikimedia. Files should also be free in the source country and not just the U.S.
- English Wikipedia can host unfree British A.I. art though as enwiki only follows U.S. laws. JWilz12345 (Talk|Contrib's.) 14:46, 9 January 2024 (UTC)
- The underlying source material are billions of images for txt2img; you want to have a sorted list of thousand–billions of images listed beneath each file? e.g. Stable Diffusion’s initial training was on low-resolution 256×256 images from LAION-2B-EN, a set of 2.3 billion English-captioned images from LAION-5B‘s full collection of 5.85 billion image-text pairs, as well as LAION-High-Resolution, another subset of LAION-5B with 170 million images greater than 1024×1024 resolution (downsampled to 512×512). Prototyperspective (talk) 12:29, 9 January 2024 (UTC)
- Oppose per the precedent that we allow a human artist to view, say, 5-10 copyrighted images of a person, and then draw a portrait of that person based on the likeness they have gleaned from those copyrighted images. A generative AI has seen far more images than that, and any copyrightable portion is likely to be heavily diluted, more so than the case of the human artist. Of course, individual generations can be nominated for deletion if a close match to a specific copyrighted image can be identified or if it is clearly a derivative work of a copyrighted subject. As for the objection "what if there's some image it's copying that we don't know about", the same objection applies for human artists: "what if the artist is not honest about their sources?" -- King of ♥ ♦ ♣ ♠ 17:39, 9 January 2024 (UTC)
- the same objection applies for human artists It could just be copium, but I feel like there's a difference of scale there that makes derivatives created by humans easier to suss out than it is for AI-generated images. Since at the end of the day people are working with extremely small data sets that usually relate to their specific area of interest. For instance, if we are talking about someone who mainly speaks Mandarin Chinese and has a history of uploading images from China, it's a pretty good bet the image in question won't be a derivative of a 1940s American cartoon character. Or we can at least ask another user who speaks the language and/or is from China if they have seen the character before. We can't do that with AI artwork though, because the dataset is essentially every single image created in the last 500 years. So sure, the same problem exists regardless, but it's the difference between looking through your junk drawer to find a key versus trying to find a grain of sand in the ocean. --Adamant1 (talk) 18:27, 9 January 2024 (UTC)
- Your argument essentially argues against itself. As you say, AI learning works from pretty much the sum total of human visual arts, and doesn’t even use any particular one of those at a time. It’s highly unlikely you’ll just randomly get a copyrighted character if you don’t ask for one. Dronebogus (talk) 01:49, 12 January 2024 (UTC)
- It’s highly unlikely you’ll just randomly get a copyrighted character if you don’t ask for one. @Dronebogus: I've used DALL-E to create portraits of women and every so often it will generate one of Scarlett Johansson, even though I don't explicitly ask for images of her. So I think it either has an algorithm that favors creating images based on popular characters or people, or it just happens to have been trained on images of female celebrities from the past 20 years more than anything else. So likenesses of Scarlett Johansson just get rendered more because of how the weighting in the training model works or something. Either way, if I can generate a couple thousand portraits where a non-trivial number of them look like living movie stars, then I don't see why the same wouldn't occur for modern movie or cartoon characters. I think it naturally follows that would be the case anyway, because there are inherently more images of the Simpsons out there that it was trained on than, say, a cartoon like Mutt and Jeff. Same goes for it rendering images of women that look like Scarlett Johansson versus Carole Lombard, or for that matter just a "random" woman. --Adamant1 (talk) 10:24, 12 January 2024 (UTC)
- If it’s super obvious then you filter it out as a copyvio. This isn’t difficult. Dronebogus (talk) 12:25, 12 January 2024 (UTC)
- Oppose much too broad. This would mean we couldn't even have examples of AI-generated artwork. I suggest reading the section beginning "That said, there are good reasons to host certain classes of AI images on Commons" at Commons talk:AI-generated media. - Jmabel ! talk 19:38, 9 January 2024 (UTC)
- Oppose (a) not all current and future models are trained with nonfree works; (b) not all models trained with nonfree works produce work that's legally considered derivative; (c) commons should follow, not lead when it comes to making decisions based on the law. Sometimes we understand the law and enact a policy that's more conservative, but in this case we'd be enacting a policy that's miles beyond any legal lines set thus far AFAIK. — Rhododendrites talk | 22:10, 9 January 2024 (UTC)
- Oppose, the exclusion of AI-generated works should be done on a case-by-case basis, not as a blanket exclusion. Good illustrative educational works that are obviously in the public domain shouldn't be grouped together with AI-generated images of Sailor Moon, Optimus Prime, and Magneto. We should judge AI-generated works on a case-by-case basis. This is still largely unregulated and current United States legislation sees most AI-generated works as public domain, let's not be stricter than the law. Yes, we should be as cautious as possible, but that caution should not be applied this broad. --Donald Trung 『徵國單』 (No Fake News 💬) (WikiProject Numismatics 💴) (Articles 📚) 22:32, 9 January 2024 (UTC)
- I don't understand the proposition here. Training AIs on the content here is one issue, and I can see an argument based on 'ban non-licence observant AI training from our licensed content', difficult to implement as that might be.
- However the solution here 'ban AI uploads' seems unrelated to that.
- I would not (as yet) ban AI uploads. Maybe I could be convinced otherwise. But I do think that we should immediately (or ASAP) require all AI to be clearly tagged as such, and maybe its source identified. Whatever we decide in the future is going to be made much easier by doing that early on. Andy Dingley (talk) 17:15, 10 January 2024 (UTC)
- Strong oppose I don’t even know where to begin with this. I think the fact that it’s based on a link to a single random article— not a strong legal basis, not extensive reliable sources, not even an argument from the proposer —is a good starting point. That and the fact that it’s based on an assumption that AI will always recognizably plagiarize a certain copyrighted work or works, rather than just pull from 90% of the Internet and overlap a billion similar works into a nonspecific whole. We’re putting the cart way, way before the horse here. Dronebogus (talk) 01:45, 12 January 2024 (UTC)
- Support yep. Where should future AI get reliable stuff to learn from, if Commons is full of AI work itself ???? Alexpl (talk) 10:47, 12 January 2024 (UTC)
- This is a reason for why I've been making sure that all images made using AI tools are in some subcategory of Category:AI-generated images. You can then easily exclude and maintain them.
- It's not "full of it" if there are a few such images among 100 million files, even of the most mundane things photographed thousands of times. Outright banning is a knee-jerk, simplistic reaction without much thought given to it, like banning images made or modified using Lightroom or Photoshop in 2003. Didn't know people here are so anti-(novel-)technology and so in favor of indiscriminately excluding tools/images. Prototyperspective (talk) 11:48, 12 January 2024 (UTC)
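As a concrete illustration of the "easily exclude" point above: assuming the files really are all categorized under Category:AI-generated images, a reuser could filter them out of a Commons search with CirrusSearch's `-incategory:` keyword. A minimal sketch, with the caveats mine (the helper function is hypothetical, and `incategory:` matches only direct category membership, so a real query would need one clause per subcategory):

```python
# Sketch: build MediaWiki API search parameters for Commons that exclude
# files directly in the given categories via CirrusSearch "-incategory:".
from urllib.parse import urlencode

def commons_search_params(term: str, exclude_categories: list[str]) -> dict:
    """Parameters for action=query&list=search, excluding some categories."""
    clauses = [term] + [f'-incategory:"{c}"' for c in exclude_categories]
    return {
        "action": "query",
        "list": "search",
        "srsearch": " ".join(clauses),
        "srnamespace": "6",  # the File: namespace
        "format": "json",
    }

params = commons_search_params("dodo", ["AI-generated images"])
print(urlencode(params))  # query string to append to /w/api.php?
```

The same `-incategory:"AI-generated images"` clause works directly in the on-wiki search box, without any API call.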
- Frame it a few years in the future where AI image generators are commonplace. Realistically, how many AI-generated images being put in normal categories at that point would it take for it to become unmanageable and for the project to lose all credibility as a source of accurate educational material? Because it just doesn't scale past a couple of enthusiasts who are willing to manage the images as a personal pet project. The same can't be said for photographs that people made minor edits to in Photoshop or whatever. The fact is that they just don't pose the same problems, and the project's reputation will never be damaged (or its usefulness rendered totally moot) by people touching up old photographs in Lightroom like it could be (and probably will be) by allowing an infinite number of fake AI-generated images of historical figures or whatever. --Adamant1 (talk) 12:48, 12 January 2024 (UTC)
- They're already commonplace. That's just hypothetical speculation and still doesn't mean there aren't other better ways to deal with that. Wikimedia Commons is a repo for freely usable media files and there's lots of illustrations and artworks in it.
- For example, simply don't add them to these categories, or only to AI-specific subcategories. I don't see how these images could be considered "accurate educational material", especially in the categories they are showing up under, but those and many other images don't get outright banned or deleted (that they don't may be a good thing, and there is a certain policy that often gets cited which I get the impression people assume only refers to subjects like nudity, where some removals from a site are by far not as detrimental to society and free education as general-purpose tools and more socially relevant subjects).
- The credibility is damaged by outright banning a useful general-purpose tool, as well as by creating unwelcoming environments for AI developers and potential media uploaders, along with undermining its reputation as being at the forefront of free speech and the creative commons – not by indiscriminately censoring/excluding/however-you-call-it free media – and by working against, rather than at the forefront of, the public domain, marginalizing new forms of art/creative methodologies/technology. There is also the potential for an infinite number of photographs of grass, trees, or tables, but still we don't ban those; in fact I think there are few if any legal, potentially useful media that WMC outright bans. Prototyperspective (talk) 13:53, 12 January 2024 (UTC)
- We really need a “geekography test”— if pictures of naked women objectified as computer software is somehow in a million years “educational”, what isn’t? Dronebogus (talk) 14:06, 12 January 2024 (UTC)
- I don't disagree with either one of you about the nude photos, but you're comparing apples and oranges, because I said "accurate educational material", not "educational material." I'm sure you both get the difference. The problem with AI artwork is that it's inherently inaccurate due to the nature of the thing. So while it's "educational" in the sense of educating people about where the technology is at, it's not educational in regards to the subjects the images are purported to be about. That doesn't go for nude women though, obviously. No one is going to mistake an image of a nude woman with a mushroom from Mario on it for a 15th-century historical figure, let alone put it in a category for one. Although I agree the former should also be dealt with, and it could be at any point. But now it's way less likely the issues presented by AI artwork will be resolved, because you've poisoned the well by going off about nude photos. --Adamant1 (talk) 15:17, 14 January 2024 (UTC)
- Oppose It has been common knowledge that AI generators are trained on copyrighted works for years. Pretending it's some kind of "Gotcha" moment is quite frankly ridiculous--Trade (talk) 15:40, 15 January 2024 (UTC)
Ban images generated with MidJourney[edit]
Counter proposal, since the original doesn't seem to be going anywhere; but at least IMO there are still unique issues with images created by MidJourney that deserve scrutiny outside of the wider question of whether to allow AI artwork in general.
Anyway, per Jeff G, MidJourney has been shown to generate derivatives regardless of the prompt or whether users asked for them. The creators of the software have also gone out of their way to intentionally train the model on copyrighted material, regardless of whether it leads to images that violate copyright. This leads to two issues:
1. There's a less than trivial chance that whatever images are generated by MidJourney will be copyright violations, and there's no easy way to know which are or not due to the nature of the thing. Nor is it something that can be easily policed at any kind of scale, especially without any kind of guideline in place making it so the images can be speedily deleted or otherwise fast-tracked to deletion. This issue will also only get worse and harder to deal with if MidJourney is ever found liable in court for violating copyright. It's much harder to deal with potential copyright violations after the fact.
2. The way MidJourney is maintained, with regard to its utter lack of respect for other people's intellectual property, clearly goes against the goals of the project and the wider movement.
Although admittedly both can be said for other AI generators, they clearly aren't as brazen or problematic in other cases as they are with MidJourney, so I think it warrants a separate solution. Also, in case anyone is going to claim we don't ban software: yes, we do. MP3s and MP4s are the ones that come to mind, but I'm sure there are others. And sure, it's for different reasons, but this still wouldn't be unique regardless.
Also, an exception to the proposal will be made in cases where the image or images are being used to illustrate MidJourney itself, although with the caveat that it shouldn't be used in a bad-faith way to game the system.
--Adamant1 (talk) 16:23, 15 January 2024 (UTC)
- Still Oppose, because a) it hasn’t been found guilty of copyright violation, b) we still need to illustrate MidJourney itself, c) you still need to prove the number of potential copyright violations goes beyond “non-trivial” into a plurality or majority. A “non-trivial” number of human uploads turn out to be copyvios, but we don’t ban humans uploading because most of them aren’t. Dronebogus (talk) 18:47, 15 January 2024 (UTC)
- @Dronebogus: I doubt it would make a difference, but I'm more than willing to modify the proposal to have an exception for images that illustrate MidJourney itself if you want me to. Really, I assumed it would be a given. Apparently not though. --Adamant1 (talk) 19:09, 15 January 2024 (UTC)
- “Ban x” usually doesn’t imply exceptions Dronebogus (talk) 19:10, 15 January 2024 (UTC)
- Oppose For the same reasons as before. Will this ever stop and aren't indiscriminate DRs against useful AI images that are often the only ones available for multiple notable subjects enough?
- It's a bad idea, and a precedent not in line with the prior advocacy for free speech, to ban image-creation tools; this applies to Photoshop as much as to Midjourney. There is a less than trivial chance that photographs or paintings are derivative works, movie stills, or similar – do we ban them all now too? It wouldn't be harder to deal with if it wasn't banned, but despite your speculations, Midjourney won't be held liable for generally violating copyright in regards to its images, which would go against all that has been said and decided previously. Machines are allowed to learn from publicly visible media as much as humans are allowed to learn from them; these tools are a great boon to the public domain and are general-purpose tools that are and will be used for pretty much everything, which is what WMC would ban while considering itself some kind of pro-public-domain platform.
- MP3s are not software but media formats. If more is done, it shouldn't be a ban. The problems you think are exclusive to AI tools, and which so far have not really manifested on WMC, are much broader and concern all kinds of images, where things like TinEye bots or reports on categories that are most likely to receive derivative works would be useful. Prototyperspective (talk) 22:13, 15 January 2024 (UTC)
- despite your speculations, Midjourney won't be liable for generally violating copyright in regards to its images Not that I think you care, since you can't seem to go one discussion related to this without claiming I'm making things up or don't know what I'm talking about, but I didn't just come up with that out of thin air. Legal experts seem to agree that MidJourney will probably be held liable for violating copyright in at least one of the many legal cases they are currently facing and/or will likely face in the future. Of course we will have to see if they are, but we have something called the "precautionary principle" for a reason. All we need is reasonable doubt as to the copyright status of something, and I think that's been more than met when it comes to artwork created with MidJourney. We also defer to what legal experts have to say about a particular topic when deciding guidelines. Whatever helps you cope though. At least I'm proposing something that isn't just banning AI artwork outright, which was supposedly your whole problem with this to begin with. --Adamant1 (talk) 22:55, 15 January 2024 (UTC)
- One, you need to stop being so condescending towards Prototyperspective. Two, even if MidJourney are found liable for copyright infringement, there’s no need to ban their output right now. Or even at all. They’ll probably work to remedy this rather than throw their hands up and say “guess we’re done here, sorry folks”. Then only images up to that point would need to be deleted. Dronebogus (talk) 17:51, 16 January 2024 (UTC)
- First of all, Prototyperspective has a long, well-established history of misrepresenting my position and treating me like I'm making things up or don't know about the subject. So if anyone is being condescending, they are. Secondly, it doesn't look like MidJourney wants or has the ability to remedy things on their end, since they intentionally trained the model on a large amount of copyrighted artwork and MidJourney creates derivatives regardless of the prompt. There probably isn't really a way to "remedy" that outside of re-training it or completely starting over, neither of which I think they are going to do. They can and have disabled certain keywords that lead to it generating copyrighted images, but it's not like we can realistically just delete images on the fly up to that point every time they patch or tweak something. --Adamant1 (talk) 18:18, 16 January 2024 (UTC)
- Support. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 16:22, 16 January 2024 (UTC)
Referring to "realistic" in AI categories[edit]
see Category:Realistic animals by DALL-E. Referring to these files as being realistic is a falsehood which damages Commons' and the wider movement's reputation as a reliable, accurate, and trustworthy source. The files should not be identified as such. Suggest that they should clearly distinguish themselves as machines (AI, if you wish) from both photographers and illustrators. Propose that all DALL-E categories are styled as "Machine Generated(AI) illustrations by DALL-E"; in the specific example it would become Category:Machine Generated(AI) animal illustrations by DALL-E, and the viewer can decide whether they consider it "realistic", a peacock term. Gnangarra 13:27, 9 January 2024 (UTC)
- This category exists exactly so that you can move images that have inaccuracies / are unrealistic out of it. You misunderstood the point of it. Moreover, you're confusing WMC with Wikipedia and ignoring file descriptions and file titles. Also see Category:Inaccurate paleoart. The proposal regarding category renaming could be reasonable, but I'd suggest that it's discussed via cat-for-discussion procedures and in a way where the title matches the contents. I've always argued (mainly in this context) that titles, categories, and file descriptions should match the actual file contents, so renaming such categories may be something I'd support. Prototyperspective (talk) 13:36, 9 January 2024 (UTC)
- this is not just about renaming a single or associated group of categories; specifically, this is about setting a policy for such styles, including the removal of peacock/suggestive terminology that can mislead those searching media files. DALL-E is just the example. Gnangarra 13:44, 9 January 2024 (UTC)
- I'm also concerned about misleading search results, but less so when it comes to clearly labelled AI art than, let's say, unexpected animations of people dying and porn images. I can't really understand why people are so worried about clearly labelled AI images showing up in search results, in a relative sense. Still, I support making things clearer. In particular, one thing I suggested was having a tag note in the corner of an image or appended to file titles that e.g. says "[Made using AI]"; something similar could be done for categories, but that seems to already be the case in your examples (I addressed further things above). Prototyperspective (talk) 14:21, 9 January 2024 (UTC)
- Strongly oppose "realistic" in the names of categories that users can freely add. It involves a judgement call. If we were to allow this, it should involve at least the level of rigor we bring to judging Quality Images. - Jmabel ! talk 19:41, 9 January 2024 (UTC)
- There is Category:Inaccurate paleoart; for images made involving AI tools I thought it would be best if inaccuracy is assumed and the default.
- Thought the cat would be useful, and I don't really care much about it, even though I don't understand why people have such strong feelings and concerns about AI images in particular but not about other comparable issues. Just nominate the cat for deletion. I thought having a way to separate, let's say, File:Parrot in Peaky Blinders style.png and File:Capybara espacial.jpg from images aiming at or achieving realistic depictions like File:Polygon illustration of a dog.png, File:Monkey in watercolour.png and File:Ai Generated Images Tiger.png would be useful (and possibly needed, so that one can assume inaccuracy if the cat is missing and easily find images that are more realistic or unrealistic). Again, I'd suggest just making a CatForDeletion/Discussion post, and I don't care what happens to the cat if people don't see usefulness in this distinction. Prototyperspective (talk) 21:59, 9 January 2024 (UTC)
Next and previous in series links[edit]
Let's say we are looking at number 06 in an automatically numbered series. Well there should be links to 05 and 07 on it, so we don't need to go back to an index page to see the next one.
No, I'm not saying the uploader person should remember to make the links.
I'm saying the upload creation process, where the 01 02 03 are assigned, should make the links.
And in fact they need to be made for all already existing series too...
And perhaps have all the links, 01 02 03... on all the pages, so one can jump around, not just to the next and previous.
Yes, I know one can manually edit the url in one's browser's omnibar. But that is so old fashioned.
Jidanni (talk) 13:24, 11 January 2024 (UTC)
- Perhaps something like this should be available as an option, but it should absolutely not be assumed automatically from file naming. I routinely use a number on the end to distinguish photos I took of the same subject, but it is very rare when they are intended as this sort of sequence. - Jmabel ! talk 16:32, 11 January 2024 (UTC)
- In principle, a good idea, but should not be automatic, because (like Jmabel said above) often numbers are merely used to differentiate photos, not to imply a sequence. I support this idea for a new optional tool... --P 1 9 9 ✉ 16:44, 11 January 2024 (UTC)
- When it's autonumbered, for example when uploader gives just one name for the batch, then yes the sequence links can be inserted by default as part of the upload wizard's autonumbering process. When the uploader provides separate numbers, then there's no autonumbering thus should be no automatic sequence links. Jim.henderson (talk) 06:36, 14 January 2024 (UTC)
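To make the proposal concrete: the autonumbering step described above could emit the navigation links at the same moment it assigns the numbers. A minimal sketch of that idea (the function names, zero-padding width, and wikitext shape are all assumptions for illustration; the actual Upload Wizard works differently and is not written in Python):

```python
# Sketch: an autonumbering step that, besides generating "Base 01.jpg",
# "Base 02.jpg", ..., also produces previous/next wikitext links that could
# be dropped into each file's description page.

def series_titles(base: str, count: int, ext: str = "jpg", pad: int = 2) -> list[str]:
    """Titles the wizard would assign when autonumbering a batch."""
    return [f"{base} {i:0{pad}d}.{ext}" for i in range(1, count + 1)]

def navigation_wikitext(titles: list[str], index: int) -> str:
    """Prev/next links for titles[index]; endpoints get only one link."""
    parts = []
    if index > 0:
        parts.append(f"previous: [[File:{titles[index - 1]}]]")
    if index < len(titles) - 1:
        parts.append(f"next: [[File:{titles[index + 1]}]]")
    return " | ".join(parts)

titles = series_titles("Town parade", 3)
for i, title in enumerate(titles):
    print(title, "->", navigation_wikitext(titles, i))
```

Backfilling existing series, as suggested above, would be harder: as Jmabel and P199 note, a trailing number does not reliably mean a sequence, so any bot doing this retroactively would need an opt-in signal rather than filename heuristics.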