File:X-Y plot of algorithmically-generated AI art demonstrating Hypernetworks.png


Original file (5,952 × 3,197 pixels, file size: 22.01 MB, MIME type: image/png)


Summary

Description

An X/Y plot of algorithmically generated AI artworks depicting a woman in various settings, created using "Anything V3.0" (hash 1a7df6b8), a custom-trained, anime-focused Stable Diffusion-based model by Furqanil Taqwa. This plot demonstrates the use of Hypernetworks, a technique created by Kurumuz in 2021 that allows Stable Diffusion-based image generation models to imitate the art style of specific artists, even artists not recognised by the original diffusion model, by applying a small neural network at various points within the larger network.

Hypernetworks are small pre-trained neural networks that, when used in conjunction with a larger neural network, steer results in a particular direction, for example applying visual styles and motifs. The Hypernetwork processes the image by finding key areas of importance, such as hair and eyes, and patches them in secondary latent space. Hypernetworks are significantly smaller in file size than DreamBooth models, another method for fine-tuning AI diffusion models, making them a viable alternative to DreamBooth in some, but not all, use cases. Hypernetwork training also requires only 6 GB of VRAM, compared to the roughly 20 GB required for DreamBooth training (although that requirement can be lowered using DeepSpeed). The downside is that Hypernetworks are comparatively less flexible and accurate, and can sometimes produce unpredictable results. For this reason, Hypernetworks are best suited to applying a visual style or cleaning up blemishes in human anatomy, while DreamBooth models are more adept at depicting specific user-defined subjects.
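The idea of a small network nudging a larger one can be sketched in a few lines. The following is a minimal, illustrative NumPy sketch of a hypernetwork module applied as a residual to an attention key/value tensor; the layer widths (mirroring the "1.0, 2.0, 1.0" layer-structure notation used below), the residual formulation, and the initialisation are assumptions, not the exact implementation of any particular Stable Diffusion front end.

```python
import numpy as np

def mish(x):
    """Mish activation: x * tanh(softplus(x))."""
    return x * np.tanh(np.log1p(np.exp(x)))

class HypernetworkModule:
    """A small two-layer MLP applied as a residual to an attention
    key or value tensor (illustrative sketch only)."""
    def __init__(self, dim, mult=2.0, scale=0.01, seed=0):
        rng = np.random.default_rng(seed)
        hidden = int(dim * mult)  # middle layer width ("1.0, 2.0, 1.0")
        self.w1 = rng.normal(0.0, scale, (dim, hidden))
        self.w2 = rng.normal(0.0, scale, (hidden, dim))

    def __call__(self, x):
        # Residual form: a freshly initialised module is near-identity,
        # so an untrained hypernetwork barely perturbs the base model.
        return x + mish(x @ self.w1) @ self.w2

# One module each for the cross-attention keys and values:
hn_k = HypernetworkModule(dim=768)
hn_v = HypernetworkModule(dim=768, seed=1)
context = np.random.default_rng(2).normal(size=(77, 768))  # stand-in for text-encoder output
k, v = hn_k(context), hn_v(context)  # steered keys/values fed to attention
```

Because the module is tiny relative to the diffusion model, only its weights need to be trained and distributed, which is why hypernetwork files are so much smaller than full fine-tuned checkpoints.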

Procedure/Methodology

These images were generated using an NVIDIA RTX 4090. Since Ada Lovelace chipsets (compute capability 8.9, which requires CUDA 11.8) are not yet fully supported by the PyTorch dependency libraries currently used by Stable Diffusion, I used a custom build of xformers, along with PyTorch cu116 and cuDNN v8.6, as a temporary workaround. The front end used for the entire generation process was the Stable Diffusion web UI created by AUTOMATIC1111.

Hypernetworks trained on the artstyles of the following artists were used:

  • As109, a Chinese artist. Trained on 440 samples using 75,000 steps on 0.0000005 LR.
  • Asanagi (朝凪), a Japanese artist and the sole member of the Fatalpulse dōjin circle. Trained using 118,500 steps.
  • homunculus (ホムンクルス), a Japanese artist and mangaka. Trained using 90,000 steps on 5e-7 LR with no normalisation, and layer structure 1.0, 2.0, 1.0.
  • j.k., a Canadian artist.
  • Ohisashiburi (お久しぶり), a Japanese artist. Trained with 1e-5 LR up to 7,000 steps, and 5e-6 LR up to 180,000 steps, with layer structure 1.0, 1.5, 1.5, 1.0, mish activation function, normal weight initialisation, layer norm set to false, dropout usage set to true.
  • Takayaki (たかやKi), a Japanese artist and member of the Jenoa Cake (じぇのばけーき) dōjin circle. Trained on 90 samples using 100,000 steps on 0.0000005 LR.
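The stepped learning rates listed above (e.g. "1e-5 up to 7,000 steps, then 5e-6 up to 180,000 steps") can be expressed as a compact schedule string. The sketch below parses a colon-delimited spec of the form "lr:last_step, …"; this grammar mirrors the scheduler syntax of common Stable Diffusion training front ends, but treat the exact format as an assumption.

```python
def parse_lr_schedule(spec):
    """Parse a stepped learning-rate spec such as "1e-5:7000, 5e-6:180000"
    into (learning_rate, last_step) pairs; a bare value like "5e-7"
    applies for the whole run."""
    schedule = []
    for part in spec.split(","):
        part = part.strip()
        if ":" in part:
            lr, last = part.split(":")
            schedule.append((float(lr), int(last)))
        else:
            schedule.append((float(part), None))  # in effect until the end
    return schedule

def lr_at(schedule, step):
    """Return the learning rate in effect at a given global step."""
    for lr, last in schedule:
        if last is None or step <= last:
            return lr
    return schedule[-1][0]  # past the final boundary: keep the last LR

# The two-phase schedule described for the Ohisashiburi hypernetwork:
two_phase = parse_lr_schedule("1e-5:7000, 5e-6:180000")
```

Dropping the learning rate partway through a long training run like this is a common way to let the hypernetwork settle without overshooting late in training.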

A batch of 768×1024 images was generated with txt2img using the following prompts:

young woman, fully clothed, volumetric lighting, mountain forest background

Negative prompt: nude, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name

Settings: Steps: 100, Sampler: DPM2, CFG scale: 7, Size: 768x1024, Highres. fix, Denoising strength: 0.7, Clip skip: 2
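For reference, the same settings can be expressed as a payload for the AUTOMATIC1111 web UI's txt2img API. The field names below follow its /sdapi/v1/txt2img endpoint; treat them as assumptions if your build differs ("Clip skip" in particular is a settings override rather than a first-class generation parameter).

```python
# Sketch of the generation settings as a txt2img API payload
# (field names assumed from the AUTOMATIC1111 web UI API).
payload = {
    "prompt": ("young woman, fully clothed, volumetric lighting, "
               "mountain forest background"),
    "negative_prompt": ("nude, lowres, bad anatomy, bad hands, text, error, "
                        "missing fingers, extra digit, fewer digits, cropped, "
                        "worst quality, low quality, normal quality, "
                        "jpeg artifacts, signature, watermark, username, "
                        "blurry, artist name"),
    "steps": 100,
    "sampler_name": "DPM2",
    "cfg_scale": 7,
    "width": 768,
    "height": 1024,
    "enable_hr": True,            # "Highres. fix"
    "denoising_strength": 0.7,
    # "Clip skip: 2" as a per-request settings override:
    "override_settings": {"CLIP_stop_at_last_layers": 2},
}
# import requests
# requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
```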

During the generation of this batch, an X/Y plot was generated using the "X/Y plot" txt2img script, along with the following settings:

  • X-axis: Hypernetwork: None, 151abd09, a00f10d3, e0a6b144, 97fa462a, a177a153, 9936f48b
  • Y-axis: Prompt S/R: mountain forest background, sitting in front of a computer. corporate office background, romantic date. french cafe background
This script repeats the same prompt and seed value for each hypernetwork, and also searches for the first value (in this case "mountain forest background") within the prompt, replacing the string with the subsequent comma-separated values.
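The search-and-replace behaviour described above can be sketched in a few lines. This is a simplified illustration of Prompt S/R, not the script's actual code: the first value is the search string, and one prompt variant is produced per value by substituting it in, so the first variant is the unchanged prompt.

```python
def prompt_sr(prompt, values):
    """Simplified sketch of the X/Y plot script's Prompt S/R:
    replace the first value (the search string) in the prompt
    with each listed value in turn."""
    needle = values[0]
    return [prompt.replace(needle, value) for value in values]

base = ("young woman, fully clothed, volumetric lighting, "
        "mountain forest background")
variants = prompt_sr(base, [
    "mountain forest background",
    "sitting in front of a computer. corporate office background",
    "romantic date. french cafe background",
])
# Each variant is then rendered with the same seed for every
# hypernetwork on the X axis, producing the grid.
```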
Date
Source: Own work
Author: Benlisquare
Licensing
(Reusing this file)
Output images

As the creator of the output images, I release these images under the licence displayed in the template below.

Stable Diffusion AI model

The Stable Diffusion AI model is released under the CreativeML OpenRAIL-M License, which "does not impose any restrictions on reuse, distribution, commercialization, adaptation" as long as the model is not intentionally used to cause harm to individuals, for instance to deliberately mislead or deceive. As stipulated by the license, the authors of the AI models claim no rights over any generated image outputs.

Anything V3.0 model

Anything V3.0, created by Furqanil Taqwa, is released under the CreativeML OpenRAIL-M License.

Addendum on datasets used to teach AI neural networks
Artworks generated by Stable Diffusion are created algorithmically by the AI diffusion model's neural network as a result of learning from various datasets; the algorithm does not reuse preexisting images from the dataset to create the new image. Generated artworks therefore cannot be considered derivative works of components of the original dataset, nor does mere resemblance to a particular artist's drawing style exceed de minimis. While an artist can claim copyright over individual works, they cannot claim copyright over mere resemblance to an artistic drawing or painting style. In simpler terms, Vincent van Gogh could claim copyright to The Starry Night, but he could not claim copyright to someone else's picture of a T-34 tank painted with brushstrokes similar to those of The Starry Night.

Licensing

I, the copyright holder of this work, hereby publish it under the following licenses:
Creative Commons
Attribution-ShareAlike
You are free:
  • to share – to copy, distribute and transmit the work
  • to remix – to adapt the work
Under the following conditions:
  • attribution – You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
  • share alike – If you remix, transform, or build upon the material, you must distribute your contributions under the same or a compatible license as the original.
GNU head: Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the GNU Free Documentation License.
You may select the license of your choice.

Captions

Items portrayed in this file

depicts

AI-generated image

AI art

File history

Click on a date/time to view the file as it appeared at that time.

Date/Time | Dimensions | User | Comment
17:05, 4 December 2022 (current) | 5,952 × 3,197 (22.01 MB) | Benlisquare | inpaint ugly hands: done
14:13, 4 December 2022 | 5,952 × 3,197 (21.89 MB) | Benlisquare | inpaint ugly hands WIP
01:19, 4 December 2022 | 5,952 × 3,197 (21.82 MB) | Benlisquare | inpaint ugly hands WIP (this process takes hours, will finish later)
22:20, 3 December 2022 | 5,952 × 3,197 (21.82 MB) | Benlisquare | {{Information |Description= An X/Y plot of algorithmically-generated AI artworks depicting a woman in various different settings, created using a custom-trained anime-focused Stable Diffusion-based model known as "[https://huggingface.co/Linaqruf/anything-v3.0 Anything V3.0]" (with hash 1a7df6b8) created by [https://huggingface.co/Linaqruf Furqanil Taqwa]. This plot serves to demonstrate the usage of Hypernetworks, a [https://blog.novelai.net/novelai-improvements-on-stable-diffusion-e10d38db8...

The following 2 pages use this file:

Metadata