
File:Demonstration of inpainting and outpainting using Stable Diffusion (step 3 of 4).png

This file is from Wikimedia Commons.

Original file (2,048 × 3,584 pixels, file size: 4.43 MB, MIME type: image/png)


Summary

Description

Demonstration of the use of inpainting and outpainting techniques on algorithmically generated artworks created using the Stable Diffusion V1-4 AI diffusion model. Stable Diffusion is not only capable of generating new images from scratch via a text prompt; it can also perform guided image synthesis to enhance existing images, through the model's diffusion-denoising mechanism.

This image illustrates the process by which Stable Diffusion can be used to perform both inpainting and outpainting; it is one of four images showing each step of the procedure.

Procedure/Methodology

All artworks were created using a single NVIDIA RTX 3090. The front-end used for the entire generation process was the Stable Diffusion web UI created by AUTOMATIC1111.

First image: Generation via text prompt

An initial 512x768 image was algorithmically generated with Stable Diffusion via txt2img using the following prompts:

Prompt: busty young girl, art style of artgerm and greg rutkowski

Negative prompt: (((deformed))), [blurry], bad anatomy, disfigured, poorly drawn face, mutation, mutated, (extra_limb), (ugly), (poorly drawn hands), messy drawing, two heads, four breasts

Settings: Steps: 50, Sampler: Euler a, CFG scale: 7, Seed: 4027103558, Size: 512x768
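For readers who want to reproduce a comparable starting image outside the web UI, the following is a minimal sketch using the Hugging Face diffusers library. This is an illustrative assumption only: the author worked entirely in the AUTOMATIC1111 web UI, whose attention-weighting syntax (the parentheses and brackets in the prompts) and seed handling are not portable to diffusers, so the output will not match exactly.

import torch
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler

# Load the same base checkpoint used by the author (Stable Diffusion v1-4).
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
# "Euler a" in the web UI corresponds to the Euler ancestral scheduler.
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

generator = torch.Generator("cuda").manual_seed(4027103558)
image = pipe(
    prompt="busty young girl, art style of artgerm and greg rutkowski",
    # Passed verbatim; diffusers treats the web UI weighting syntax as plain text.
    negative_prompt="(((deformed))), [blurry], bad anatomy, disfigured, "
                    "poorly drawn face, mutation, mutated, (extra_limb), (ugly), "
                    "(poorly drawn hands), messy drawing, two heads, four breasts",
    width=512,
    height=768,
    num_inference_steps=50,
    guidance_scale=7,
    generator=generator,
).images[0]
image.save("txt2img_initial.png")   # assumed output filename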

Then, two passes of the SD upscale script using "Real-ESRGAN 4x plus anime 6B" were run within img2img. The first pass used a tile overlap of 64, denoising strength of 0.3, 50 sampling steps with Euler a, and a CFG scale of 7. The second pass used a tile overlap of 128, denoising strength of 0.1, 10 sampling steps with Euler a, and a CFG scale of 7. This creates our initial 2048x3072 image to begin working with. Unfortunately for her (and fortunately for the purpose of this demonstration), it appears that the AI neglected to give this woman one of her arms.
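As a rough illustration of what a single SD-upscale pass involves, here is a simplified sketch built on the diffusers img2img pipeline. It is an assumption, not the script the author actually ran: a plain Lanczos resize stands in for Real-ESRGAN 4x anime 6B, overlapping tiles are pasted back without the blending the real script performs, and the two passes are collapsed into one.

import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline, EulerAncestralDiscreteScheduler

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

src = Image.open("txt2img_initial.png").convert("RGB")   # assumed filename
# Placeholder for the Real-ESRGAN upscale used by the author.
upscaled = src.resize((src.width * 4, src.height * 4), Image.LANCZOS)

tile, overlap = 512, 64            # first-pass tile overlap of 64
stride = tile - overlap
result = upscaled.copy()
for top in range(0, upscaled.height - overlap, stride):
    for left in range(0, upscaled.width - overlap, stride):
        box = (left, top,
               min(left + tile, upscaled.width),
               min(top + tile, upscaled.height))
        patch = upscaled.crop(box)
        refined = pipe(
            prompt="busty young girl, art style of artgerm and greg rutkowski",
            # The same negative prompt as the first step would normally be passed too.
            image=patch,
            strength=0.3,           # denoising strength of the first pass
            num_inference_steps=50,
            guidance_scale=7,
        ).images[0]
        # The real SD upscale script blends overlapping regions; a plain paste
        # keeps this sketch short.
        result.paste(refined.resize(patch.size), box[:2])
result.save("upscaled_pass1.png")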

Second image: Outpainting

Using the "Outpainting mk2" script within img2img, the bottom of the image was extended by 512 pixels (via two passes, each pass extending 256 pixels), using 100 sampling steps with Euler a, denoising strength of 0.8, CFG scale of 7.5, mask blur of 4, fall-off exponent value of 1.8, colour variation set to 0.03. The prompts used were identical to those utilised during the first step. This subsequently increases the image's dimensions to 2048x3584, while also revealing the woman's midriff, belly button and skirt, which were previously absent from the original AI-generated image.

Third image: Preparation for inpainting

In GIMP, I drew a very shoddy attempt at a human arm using the standard paintbrush. This will provide a guide for the AI model to generate a new arm.
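Although this step was done by hand in GIMP, the guide is nothing more than rough colour in roughly the right place; any tool that can put a skin-toned blob where the arm should go will do. A tiny PIL sketch of the same idea follows; the coordinates and colour are invented placeholders, not the author's values.

from PIL import Image, ImageDraw

img = Image.open("outpainted.png").convert("RGB")   # assumed filename
draw = ImageDraw.Draw(img)
# A crude bent "arm": two thick strokes in an approximate skin tone.
# All coordinates below are hypothetical placeholders.
skin = (224, 186, 164)
draw.line([(1500, 1200), (1650, 1700)], fill=skin, width=120)   # upper arm
draw.line([(1650, 1700), (1520, 2100)], fill=skin, width=110)   # forearm
img.save("arm_guide.png")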

Final image: Inpainting

Using the inpaint feature for img2img, I drew a mask over the arm drawn in the previous step, along with a portion of the shoulder. The following settings were used for all passes:

  • Inpaint masked
  • Masked content: original
  • Inpaint at full resolution, padding at 256 pixels
  • Steps: 80, Sampler: Euler a

An initial pass was run using the following prompts:

Prompt: perfect arm, young woman's arm, (((anterior elbow))), (((inside of elbow))), bent arm, slender arm, realistic arm, wrinkled short sleeve of white blouse, woman's shoulder, brown hair on top of sleeve, (((pale skin))), skin on arm, smooth skin, art style of artgerm and greg rutkowski

Negative prompt: (((torn blouse))), (((torn sleeve))), (((deformed))), [blurry], bad anatomy, disfigured, multiple arms, mutation, mutated, (extra_limb), (ugly), (poorly drawn hands), messy drawing

Settings: CFG scale: 17, Denoising strength: 0.6, Seed: 525737653

This created the arm; a subsequent pass was then run to fix deformations and blemishes around the newly generated arm and along the sleeve. After drawing a new mask over the shoulder, the following prompts were used:

Prompt: brown hair on top of sleeve and arm, wrinkled short sleeve of white blouse, young woman's upper arm beside her chest, woman's shoulder, skin under sleeve, art style of artgerm and greg rutkowski

Negative prompt: (((deformed))), [blurry], bad anatomy, disfigured, multiple arms, mutation, mutated, (extra_limb), (ugly), (poorly drawn hands), messy drawing

Settings: CFG scale: 7, Denoising strength: 0.4, Seed: 653575127

This pass produced the final image.
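For reference, the first inpainting pass described above corresponds roughly to the diffusers sketch below, which also approximates the "inpaint at full resolution, padding at 256 pixels" setting by cropping around the mask and pasting the repainted crop back. The inpainting checkpoint, the filenames, and the assumption that the installed diffusers release accepts a strength argument on this pipeline are all assumptions; the author used the base v1-4 model inside the web UI, whose prompt-weighting syntax is likewise not reproduced.

import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline, EulerAncestralDiscreteScheduler

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

image = Image.open("arm_guide.png").convert("RGB")   # image with the hand-drawn arm
mask = Image.open("arm_mask.png").convert("L")       # white over the drawn arm and shoulder

# Approximate "inpaint at full resolution, padding 256": crop around the mask,
# inpaint the crop, then paste it back into the full-size image.
pad = 256
left, top, right, bottom = mask.getbbox()
box = (max(left - pad, 0), max(top - pad, 0),
       min(right + pad, image.width), min(bottom + pad, image.height))
image_crop, mask_crop = image.crop(box), mask.crop(box)

generator = torch.Generator("cuda").manual_seed(525737653)
repainted = pipe(
    prompt=("perfect arm, young woman's arm, anterior elbow, inside of elbow, "
            "bent arm, slender arm, realistic arm, wrinkled short sleeve of white "
            "blouse, woman's shoulder, brown hair on top of sleeve, pale skin, "
            "skin on arm, smooth skin, art style of artgerm and greg rutkowski"),
    negative_prompt=("torn blouse, torn sleeve, deformed, blurry, bad anatomy, "
                     "disfigured, multiple arms, mutation, mutated, extra_limb, "
                     "ugly, poorly drawn hands, messy drawing"),
    image=image_crop,
    mask_image=mask_crop,
    width=image_crop.width // 8 * 8,
    height=image_crop.height // 8 * 8,
    strength=0.6,                # first-pass denoising strength
    num_inference_steps=80,
    guidance_scale=17,
    generator=generator,
).images[0]

image.paste(repainted.resize(image_crop.size), box[:2])
image.save("inpainted_pass1.png")
# The clean-up pass is the same call with the second prompt set,
# strength 0.4, CFG scale 7 and seed 653575127, over a new shoulder mask.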

Date
Source: Own work
Author: Benlisquare
Permission
(Reusing this file)
Output images

As the creator of the output images, I release this image under the licence displayed within the template below.

Stable Diffusion AI model

The Stable Diffusion AI model is released under the CreativeML OpenRAIL-M License, which "does not impose any restrictions on reuse, distribution, commercialization, adaptation" as long as the model is not intentionally used to cause harm to individuals, for instance to deliberately mislead or deceive. As stipulated by the license, the authors of the AI model claim no rights over any image outputs it generates.

Addendum on datasets used to train AI neural networks
Artworks generated by Stable Diffusion are created algorithmically by the AI diffusion model's neural network, which has learned from various datasets; the algorithm does not reuse preexisting images from the dataset to create the new image. Generated artworks therefore cannot be considered derivative works of individual components of the original dataset, nor can a coincidental resemblance to a particular artist's drawing style rise above de minimis. While an artist can claim copyright over individual works, they cannot claim copyright over mere resemblance to an artistic drawing or painting style. In simpler terms, Vincent van Gogh could claim copyright to The Starry Night, but he could not claim copyright to someone else's picture of a T-34 tank painted with brushstrokes similar to those of The Starry Night.

Licensing

I, the copyright holder of this work, hereby publish it under the following licenses:
Creative Commons
Attribution-ShareAlike
You are free:
  • to share – to copy, distribute and transmit the work
  • to remix – to adapt the work
Under the following conditions:
  • attribution – You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
  • share alike – If you remix, transform, or build upon the material, you must distribute your contributions under the same or a compatible license as the original.
GNU head – Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled GNU Free Documentation License.
You may select the license of your choice.


Items portrayed in this file

depicts
  • cleavage
  • cleavage (crystallography)

creator
  • some value without a Wikidata item
    author name string: Benlisquare
    Wikimedia username: Benlisquare

copyright status
  • copyrighted

copyright license
  • GNU Free Documentation License, version 1.2 or later
  • Creative Commons Attribution-ShareAlike 4.0 International

media type
  • image/png

source of file
  • original creation by uploader

AI-generated image

instance of
  • artificial intelligence art

File history

Click on a date/time to view the file as it appeared at that time.

Date/Time | Dimensions | User | Comment
  • current | 20:01, 28 September 2022 | 2,048 × 3,584 (4.43 MB) | Benlisquare | I didn't like how dark the arm turned out, so I re-did it again. Redrew the makeshift arm in a lighter shade in GIMP, then ran two passes of inpainting: Steps: 80, Sampler: Euler a, CFG scale: 17, Seed: 525737653, Denoising strength: 0.6 to generate arm; Steps: 80, Sampler: Euler a, CFG scale: 7, Seed: 653575127, Denoising strength: 0.4 for cleanup.
  • 14:23, 27 September 2022 | 2,048 × 3,584 (4.42 MB) | Benlisquare | {{Information |Description=Demonstration of the usage of inpainting and outpainting techniques on algorithmically-generated artworks created using the [https://github.com/CompVis/stable-diffusion Stable Diffusion V1-4] AI diffusion model. ...
