{"id":605,"date":"2023-08-22T22:57:25","date_gmt":"2023-08-22T13:57:25","guid":{"rendered":"https:\/\/skanto.co.kr\/?p=605"},"modified":"2023-08-29T13:52:26","modified_gmt":"2023-08-29T04:52:26","slug":"generative-deep-learning","status":"publish","type":"post","link":"https:\/\/skanto.co.kr\/?p=605","title":{"rendered":"Generative Deep Learning"},"content":{"rendered":"\n<h2 class=\"wp-block-heading\">What is Generative Modeling?<\/h2>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>Generative modeling is a branch of machine learning that involves training a model to produce new data that is similar to a given dataset.<\/p>\n<\/blockquote>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"600\" height=\"261\" src=\"https:\/\/skanto.co.kr\/wp-content\/uploads\/2023\/08\/figure_1-2.png\" alt=\"\" class=\"wp-image-609\" srcset=\"https:\/\/skanto.co.kr\/wp-content\/uploads\/2023\/08\/figure_1-2.png 600w, https:\/\/skanto.co.kr\/wp-content\/uploads\/2023\/08\/figure_1-2-300x131.png 300w\" sizes=\"auto, (max-width: 600px) 100vw, 600px\" \/><figcaption class=\"wp-element-caption\">figure 1-1<\/figcaption><\/figure>\n\n\n\n<p>We can sample from this model to create novel, realistic images of horses that did not exist in the original dataset.<\/p>\n\n\n\n<p>One data point in the training data is called an <em>observation<\/em>. 
Each observation consists of many <em>features<\/em>.<\/p>\n\n\n\n<p><span style=\"text-decoration: underline;\">A generative model must be <em>probabilistic<\/em> rather than deterministic<\/span>, because we want to be able to sample many different variations of the output, rather than get the same output every time.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Generative Versus Discriminative Modeling<\/h3>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"600\" height=\"261\" src=\"https:\/\/skanto.co.kr\/wp-content\/uploads\/2023\/08\/figure_1-1.png\" alt=\"\" class=\"wp-image-608\" srcset=\"https:\/\/skanto.co.kr\/wp-content\/uploads\/2023\/08\/figure_1-1.png 600w, https:\/\/skanto.co.kr\/wp-content\/uploads\/2023\/08\/figure_1-1-300x131.png 300w\" sizes=\"auto, (max-width: 600px) 100vw, 600px\" \/><figcaption class=\"wp-element-caption\">figure 1-2<\/figcaption><\/figure>\n\n\n\n<p>Figure 1-2 shows the discriminative modeling process.<\/p>\n\n\n\n<p>Generative modeling doesn&#8217;t require the dataset to be labeled because it concerns itself with generating entirely new images, rather than trying to predict a label of a given image.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<h4 class=\"wp-block-heading\">Conditional Generative Models<\/h4>\n\n\n\n<p>For example, if our dataset contains different types of fruit, we could tell our generative model to specifically generate an image of an apple.<\/p>\n<\/blockquote>\n\n\n\n<h3 class=\"wp-block-heading\">The Rise of Generative Modeling<\/h3>\n\n\n\n<p>Until recently, discriminative modeling has been the driving force behind most progress in machine learning. 
This is <em>because the corresponding generative modeling problem is typically much more difficult to tackle<\/em>.<\/p>\n\n\n\n<p>However, as machine learning technologies have matured, this assumption has gradually weakened.<\/p>\n\n\n\n<p><span style=\"text-decoration: underline;\">In the last 10 years many of the most interesting advancements in the field have come through novel applications of machine learning to generative modeling tasks.<\/span><\/p>\n\n\n\n<p>Generative services that target specific business problems:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>generate original blog posts given a particular subject matter<\/li>\n\n\n\n<li>produce a variety of images of your product in any setting<\/li>\n\n\n\n<li>write social media content and ad copy to match your brand<\/li>\n\n\n\n<li>game design and cinematography to output video and music<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Generative Modeling and AI<\/h3>\n\n\n\n<p>There are <em><span style=\"text-decoration: underline;\">three deeper reasons<\/span><\/em> why generative modeling can be considered the key to unlocking a far more sophisticated form of artificial intelligence that goes beyond what discriminative modeling alone can achieve.<\/p>\n\n\n\n<p><strong>Firstly<\/strong>, we shouldn&#8217;t limit our machine training to simply categorizing data and should also be concerned with training models that capture a more complete understanding of the data distribution. Many of the same techniques that have driven development in discriminative modeling, such as deep learning, can be utilized by generative models too.<\/p>\n\n\n\n<p><strong>Secondly<\/strong>, generative modeling is now being used to drive progress in other fields of AI, such as reinforcement learning. A traditional approach (to train a robot to walk across terrain) is fairly inflexible because it is trained to optimize the policy for one particular task. 
An alternative approach that has recently gained traction is to train the agent to learn a <em><span style=\"text-decoration: underline;\">world model<\/span><\/em> of the environment (rather than the real environment itself) using a generative model, independent of any particular task.<\/p>\n\n\n\n<p><strong>Finally<\/strong>, if we are to say that we have built a machine that has acquired a form of intelligence, generative modeling must surely be part of the solution. Current neuroscientific theory suggests that our perception of reality is not a highly complex discriminative model operating on our sensory input <em><span style=\"text-decoration: underline;\">to produce <strong>predictions<\/strong> of what we are experiencing<\/span><\/em>, but is instead a generative model that is trained from birth <em><span style=\"text-decoration: underline;\">to produce <strong>simulations<\/strong> of our surroundings<\/span><\/em> that accurately match the future.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">The Generative Modeling Framework<\/h3>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<ul class=\"wp-block-list\">\n<li>We have a dataset of observations X.<\/li>\n\n\n\n<li>We assume that the observations have been generated according to some unknown distribution, <em>P<sub>data<\/sub><\/em><\/li>\n\n\n\n<li>We want to build a generative model <em>P<sub>model<\/sub><\/em> that mimics <em>P<sub>data<\/sub><\/em>. If we achieve this goal, we can sample from <em>P<sub>model<\/sub><\/em> to generate observations that appear to have been drawn from <em>P<sub>data<\/sub><\/em>.<\/li>\n\n\n\n<li>Therefore, the desirable properties of P<sub>model<\/sub> are: \n<ul class=\"wp-block-list\">\n<li><em>Accuracy<\/em><br>If <em>P<sub>model<\/sub><\/em> is high for a generated observation, it should look like it has been drawn from <em>P<sub>data<\/sub><\/em>. 
If <em>P<sub>model<\/sub><\/em> is low for a generated observation, it should not look like it has been drawn from <em>P<sub>data<\/sub><\/em>.<\/li>\n\n\n\n<li><em>Generation<\/em><br>It should be possible to easily sample a new observation from <em>P<sub>model<\/sub><\/em>.<\/li>\n\n\n\n<li><em>Representation<\/em><br>It should be possible to understand how different high-level features in the data are represented by <em>P<sub>model<\/sub><\/em>.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<\/blockquote>\n\n\n\n<h3 class=\"wp-block-heading\">Representation Learning<\/h3>\n\n\n\n<p>Instead of trying to model the high-dimensional sample space directly, we describe each observation in the training set using some lower-dimensional <em><strong>latent space<\/strong><\/em> and then learn a mapping function that can take a point in the latent space and map it to a point in the original domain. In other words, each point in the latent space is a <em><span style=\"text-decoration: underline;\">representation<\/span><\/em> of some high-dimensional observation.<\/p>\n\n\n\n<p>One of the benefits of training models that utilize a latent space is that we can perform operations that affect high-level properties of the image by <em>manipulating its representation vector within the more manageable latent space<\/em>.<\/p>\n\n\n\n<p>The concept of encoding the training dataset into a latent space so that we can sample from it and decode the point back to the original domain is common to many generative modeling techniques.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"600\" height=\"618\" src=\"https:\/\/skanto.co.kr\/wp-content\/uploads\/2023\/08\/figure_1-9.png\" alt=\"\" class=\"wp-image-643\" srcset=\"https:\/\/skanto.co.kr\/wp-content\/uploads\/2023\/08\/figure_1-9.png 600w, https:\/\/skanto.co.kr\/wp-content\/uploads\/2023\/08\/figure_1-9-291x300.png 291w, 
https:\/\/skanto.co.kr\/wp-content\/uploads\/2023\/08\/figure_1-9-262x270.png 262w\" sizes=\"auto, (max-width: 600px) 100vw, 600px\" \/><figcaption class=\"wp-element-caption\">figure 1-9 The dog manifold in high-dimensional pixel space is mapped to a simpler latent space that can be sampled from<\/figcaption><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">Core Probability Theory<\/h2>\n\n\n\n<p>Five key terms:<\/p>\n\n\n\n<p><em>Sample space<\/em><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>The sample space is the complete set of all values an observation x can take.<\/li>\n<\/ul>\n\n\n\n<p><em>Probability density function<\/em> (or simply <em>density function<\/em>)<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>A function <em>p(x)<\/em> that maps a point x in the sample space to a number between 0 and 1. The integral of the density function over all points in the sample space must equal 1, so that it is a well-defined probability distribution.<\/li>\n<\/ul>\n\n\n\n<p><em>Parametric modeling<\/em><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>A technique that we can use to structure our approach to finding a suitable <em>P<sub>model<\/sub>(x)<\/em><\/li>\n<\/ul>\n\n\n\n<p><em>Likelihood<\/em><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>The <em>likelihood \u2112(\u03b8|\ud835\udc31)<\/em> of a parameter set \u03b8 is a function that measures the plausibility of \u03b8, given some observed point \ud835\udc31<\/li>\n<\/ul>\n\n\n\n<p><em>Maximum likelihood estimation<\/em><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>The technique for estimating the parameter set \u03b8 that maximizes the likelihood \u2112(\u03b8|\ud835\udc31), i.e., the parameters most likely to explain the observed data<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Generative Model Taxonomy<\/h2>\n\n\n\n<p>While all types of generative models ultimately aim to solve the same task, they all take slightly different approaches to modeling the density function. 
There are three possible approaches:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>Explicitly model the density function, but constrain the model in some way, so that the density function is tractable (i.e., it can be calculated).<\/li>\n\n\n\n<li>Explicitly model a tractable approximation of the density function.<\/li>\n\n\n\n<li>Implicitly model the density function, through a stochastic process that directly generates data.<\/li>\n<\/ol>\n\n\n\n<figure class=\"wp-block-image size-full is-style-default\"><img loading=\"lazy\" decoding=\"async\" width=\"600\" height=\"354\" src=\"https:\/\/skanto.co.kr\/wp-content\/uploads\/2023\/08\/figure_1-10.png\" alt=\"\" class=\"wp-image-657\" srcset=\"https:\/\/skanto.co.kr\/wp-content\/uploads\/2023\/08\/figure_1-10.png 600w, https:\/\/skanto.co.kr\/wp-content\/uploads\/2023\/08\/figure_1-10-300x177.png 300w, https:\/\/skanto.co.kr\/wp-content\/uploads\/2023\/08\/figure_1-10-458x270.png 458w\" sizes=\"auto, (max-width: 600px) 100vw, 600px\" \/><figcaption class=\"wp-element-caption\">figure 1-10 A taxonomy of generative modeling approaches<\/figcaption><\/figure>\n\n\n\n<p>The best-known example of an implicit generative model is a generative adversarial network.<\/p>\n\n\n\n<p>Approximate density models include variational autoencoders.<\/p>\n\n\n\n<p>A common thread that runs through all of the generative model family types is <em>deep learning<\/em>.<\/p>\n\n\n\n<h1 class=\"wp-block-heading\">Deep Learning<\/h1>\n\n\n\n<h2 class=\"wp-block-heading\">Deep Neural Networks<\/h2>\n\n\n\n<p>The majority of deep learning systems are artificial neural networks (ANNs, or just neural networks for short) with multiple stacked hidden layers.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">What is a Neural Network?<\/h3>\n\n\n\n<p>A neural network consists of a series of stacked <em>layers<\/em>. 
Each layer contains <em>units<\/em> that are connected to the previous layer&#8217;s units through a set of <em>weights<\/em>.<\/p>\n\n\n\n<p>One of the most common layers is the <em>fully connected<\/em> (or dense) layer that connects all units in the layer directly to every unit in the previous layer.<\/p>\n\n\n\n<p>Neural networks where all adjacent layers are fully connected are called <em><strong>multilayer perceptrons<\/strong><\/em> (MLPs).<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"600\" height=\"330\" src=\"https:\/\/skanto.co.kr\/wp-content\/uploads\/2023\/08\/figure_2-2.png\" alt=\"\" class=\"wp-image-664\" srcset=\"https:\/\/skanto.co.kr\/wp-content\/uploads\/2023\/08\/figure_2-2.png 600w, https:\/\/skanto.co.kr\/wp-content\/uploads\/2023\/08\/figure_2-2-300x165.png 300w, https:\/\/skanto.co.kr\/wp-content\/uploads\/2023\/08\/figure_2-2-491x270.png 491w\" sizes=\"auto, (max-width: 600px) 100vw, 600px\" \/><figcaption class=\"wp-element-caption\">figure 2-2 An example of a multilayer perceptron that predicts if a face is smiling<\/figcaption><\/figure>\n\n\n\n<p>Let&#8217;s walk through the network shown in figure 2-2:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>Unit A receives the value for an individual channel of an input pixel.<\/li>\n\n\n\n<li>Unit B combines its input values so that it fires strongest when a particular low-level feature such as an edge is present.<\/li>\n\n\n\n<li>Unit C combines the low-level features so that it fires strongest when a higher-level feature such as teeth is seen in the image.<\/li>\n\n\n\n<li>Unit D combines the high-level features so that it fires strongest when the person in the original image is smiling.<\/li>\n<\/ol>\n\n\n\n<p>The layers between the input and output layers are called <em>hidden layers<\/em>.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Multilayer Perceptron (MLP)<\/h2>\n\n\n\n<p>The MLP is a discriminative (rather than generative) model, 
but supervised learning will still play a role in many types of generative models.<\/p>\n\n\n\n<p><em><strong>Preparing the Data -&gt; Building the Model (Layers, Activation functions) -&gt; Compiling the Model (Loss functions, Optimizers) -&gt; Training the Model -&gt; Evaluating the Model<\/strong><\/em><\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<ul class=\"wp-block-list\">\n<li>There are many kinds of activation functions, but three of the most important are ReLU (rectified linear unit), sigmoid, and softmax.<\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Three of the most commonly used loss functions are mean squared error, categorical cross-entropy, and binary cross-entropy.<\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li>One of the most commonly used and stable optimizers is Adam (Adaptive Moment Estimation).<\/li>\n<\/ul>\n<\/blockquote>\n\n\n\n<h2 class=\"wp-block-heading\">Convolutional Neural Network (CNN)<\/h2>\n\n\n\n<p>To make the network perform well, <em>the spatial structure of the input images<\/em> needs to be taken into account.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Convolutional Layers<\/h3>\n\n\n\n<p>The convolution is performed by multiplying the filter pixel-wise with the portion of the image, and summing the results. 
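<\/p>\n\n\n\n<p>As a quick aside (not from the original text), this pixel-wise multiply-and-sum can be sketched in a few lines of NumPy; the filter values and image patch below are invented for the example:<\/p>\n\n\n\n

```python
import numpy as np

# A hypothetical 3x3 filter that responds to horizontal edges
conv_filter = np.array([[ 1,  1,  1],
                        [ 0,  0,  0],
                        [-1, -1, -1]])

# A 3x3 image patch with bright rows above dark rows (a horizontal edge)
patch = np.array([[0.9, 0.9, 0.9],
                  [0.5, 0.5, 0.5],
                  [0.1, 0.1, 0.1]])

# Convolution at this location: multiply pixel-wise, then sum
output = np.sum(conv_filter * patch)          # strong positive response (2.4)

# Flipping the patch vertically inverts the edge, flipping the sign
inverted = np.sum(conv_filter * patch[::-1])  # strong negative response (-2.4)
```

\n\n\n\n<p>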
<span style=\"text-decoration: underline;\"><em>The output is more positive when the portion of the image closely matches the filter and more negative when the portion of the image is the inverse of the filter<\/em><\/span>.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"600\" height=\"468\" src=\"https:\/\/skanto.co.kr\/wp-content\/uploads\/2023\/08\/figure_2-10.png\" alt=\"\" class=\"wp-image-678\" srcset=\"https:\/\/skanto.co.kr\/wp-content\/uploads\/2023\/08\/figure_2-10.png 600w, https:\/\/skanto.co.kr\/wp-content\/uploads\/2023\/08\/figure_2-10-300x234.png 300w, https:\/\/skanto.co.kr\/wp-content\/uploads\/2023\/08\/figure_2-10-346x270.png 346w\" sizes=\"auto, (max-width: 600px) 100vw, 600px\" \/><figcaption class=\"wp-element-caption\">figure 2-10 A 3 x 3 convolutional filter applied to two portions of a grayscale image<\/figcaption><\/figure>\n\n\n\n<p>If we move the filter across the entire image from left to right and top to bottom, recording the convolutional output as we go, we obtain a new array that picks out a particular feature of the input, depending on the values in the filter.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"600\" height=\"350\" src=\"https:\/\/skanto.co.kr\/wp-content\/uploads\/2023\/08\/figure_2-11.png\" alt=\"\" class=\"wp-image-680\" srcset=\"https:\/\/skanto.co.kr\/wp-content\/uploads\/2023\/08\/figure_2-11.png 600w, https:\/\/skanto.co.kr\/wp-content\/uploads\/2023\/08\/figure_2-11-300x175.png 300w, https:\/\/skanto.co.kr\/wp-content\/uploads\/2023\/08\/figure_2-11-463x270.png 463w\" sizes=\"auto, (max-width: 600px) 100vw, 600px\" \/><figcaption class=\"wp-element-caption\">figure 2-11 Two convolutional filters applied to a grayscale image<\/figcaption><\/figure>\n\n\n\n<p>A convolutional layer is simply a collection of filters, where the values stored in the filters are the weights that are learned by the neural 
network through training. Initially these are random, but gradually the filters adapt their weights to start picking out interesting features such as edges or particular color combinations.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"443\" height=\"115\" src=\"https:\/\/skanto.co.kr\/wp-content\/uploads\/2023\/08\/figure_2-12.png\" alt=\"\" class=\"wp-image-682\" srcset=\"https:\/\/skanto.co.kr\/wp-content\/uploads\/2023\/08\/figure_2-12.png 443w, https:\/\/skanto.co.kr\/wp-content\/uploads\/2023\/08\/figure_2-12-300x78.png 300w\" sizes=\"auto, (max-width: 443px) 100vw, 443px\" \/><figcaption class=\"wp-element-caption\">figure 2-12 A 3 x 3 x 1 kernel (gray) being passed over a 5 x 5 x 1 input image (blue), with <code>padding = \"same\"<\/code> and <code>strides = 1<\/code>, to generate the 5 x 5 x 1 output (green)<\/figcaption><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\">Stride<\/h3>\n\n\n\n<p>The <code>strides<\/code> parameter is the step size used by the layer to move the filters across the input. 
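<\/p>\n\n\n\n<p>To see the effect of the stride on output size, here is a small hypothetical helper (a hand-written sketch of the sizing rules Keras follows, not a call into any library):<\/p>\n\n\n\n

```python
import math

def conv_output_size(input_size, kernel_size, stride, padding="same"):
    """Output size along one spatial dimension of a convolutional layer."""
    if padding == "same":
        # Zero-padding keeps output = ceil(input / stride)
        return math.ceil(input_size / stride)
    # "valid" padding: the filter must fit entirely inside the input
    return math.ceil((input_size - kernel_size + 1) / stride)

print(conv_output_size(32, kernel_size=4, stride=2))  # 16: stride 2 halves the size
```

\n\n\n\n<p>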
Increasing the stride therefore reduces the size of the output tensor.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Padding<\/h3>\n\n\n\n<p>The <code>padding = \"same\"<\/code> input parameter pads the input data with zeros so that the output size from the layer is exactly the same as the input size when <code>strides = 1<\/code>.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Stacking convolutional layers<\/h3>\n\n\n\n<p>The output of a <code>Conv2D<\/code> layer is another four-dimensional tensor, now of shape <code>(batch_size, height, width, filters)<\/code>, so we can stack <code>Conv2D<\/code> layers on top of each other to grow the depth of our neural network and make it more powerful.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>from tensorflow.keras import layers, models\n\ninput_layer = layers.Input(shape=(32,32,3))\nconv_layer_1 = layers.Conv2D(\n    filters = 10\n    , kernel_size = (4,4)\n    , strides = 2\n    , padding = 'same'\n    )(input_layer)\nconv_layer_2 = layers.Conv2D(\n    filters = 20\n    , kernel_size = (3,3)\n    , strides = 2\n    , padding = 'same'\n    )(conv_layer_1)\nflatten_layer = layers.Flatten()(conv_layer_2)\noutput_layer = layers.Dense(units=10, activation = 'softmax')(flatten_layer)\nmodel = models.Model(input_layer, output_layer)<\/code><\/pre>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"600\" height=\"328\" src=\"https:\/\/skanto.co.kr\/wp-content\/uploads\/2023\/08\/figure_2-13.png\" alt=\"\" class=\"wp-image-686\" srcset=\"https:\/\/skanto.co.kr\/wp-content\/uploads\/2023\/08\/figure_2-13.png 600w, https:\/\/skanto.co.kr\/wp-content\/uploads\/2023\/08\/figure_2-13-300x164.png 300w, https:\/\/skanto.co.kr\/wp-content\/uploads\/2023\/08\/figure_2-13-494x270.png 494w\" sizes=\"auto, (max-width: 600px) 100vw, 600px\" \/><figcaption class=\"wp-element-caption\">figure 2-13 A diagram of a convolutional neural 
network<\/figcaption><\/figure>\n","protected":false},"excerpt":{"rendered":"<p>What is Generative Modeling? Generative modeling is a branch of machine learning that involves training a model to produce new data that is similar to a given dataset We can sample from this model to create novel, realistic images of horses that did not exist in the original dataset. One data point in the training data is called as observation. Each observation consists of many features. A generative model must be probabilistic rather than deterministic, because we want to be&#8230;<\/p>\n<p class=\"read-more\"><a class=\"btn btn-default\" href=\"https:\/\/skanto.co.kr\/?p=605\"> Read More<span class=\"screen-reader-text\">  Read More<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_import_markdown_pro_load_document_selector":0,"_import_markdown_pro_submit_text_textarea":"","footnotes":""},"categories":[14,7],"tags":[48,76,75],"class_list":["post-605","post","type-post","status-publish","format-standard","hentry","category-sw-development","category-7","tag-ai","tag-deep-learning","tag-generative"],"_links":{"self":[{"href":"https:\/\/skanto.co.kr\/index.php?rest_route=\/wp\/v2\/posts\/605","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/skanto.co.kr\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/skanto.co.kr\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/skanto.co.kr\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/skanto.co.kr\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=605"}],"version-history":[{"count":68,"href":"https:\/\/skanto.co.kr\/index.php?rest_route=\/wp\/v2\/posts\/605\/revisions"}],"predecessor-version":[{"id":690,"href":"https:\/\/skanto.co.kr\/index.php?rest_route=\/wp\/v2\/posts\/605\/revisions\/690"}],"wp:attachmen
t":[{"href":"https:\/\/skanto.co.kr\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=605"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/skanto.co.kr\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=605"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/skanto.co.kr\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=605"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}