{"id":169,"date":"2023-03-26T18:22:47","date_gmt":"2023-03-26T18:22:47","guid":{"rendered":"https:\/\/blog.amalgamcs.com\/?p=169"},"modified":"2023-03-26T18:41:17","modified_gmt":"2023-03-26T18:41:17","slug":"keras-your-gateway-to-neural-networks","status":"publish","type":"post","link":"https:\/\/blog.amalgamcs.com\/index.php\/2023\/03\/26\/keras-your-gateway-to-neural-networks\/","title":{"rendered":"Keras: Your Gateway to Neural Networks"},"content":{"rendered":"\n<p>Keras is a high-level open-source neural network library written in Python. It is designed to be user-friendly, modular, and extensible, and can run on top of other popular machine learning frameworks such as TensorFlow, Theano, and CNTK.<\/p>\n\n\n\n<p>Keras provides a simple and intuitive API that allows users to quickly build and prototype deep learning models with just a few lines of code. It supports a wide range of neural network architectures, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and multi-layer perceptrons (MLPs), and also includes many pre-trained models for common tasks such as image recognition and natural language processing.<\/p>\n\n\n\n<p>Keras is widely used in industry and academia for a variety of applications, including computer vision, speech recognition, and natural language processing. Its popularity is due to its ease of use and flexibility, which makes it accessible to beginners while still providing advanced features for experienced users.<\/p>\n\n\n\n<h2>Code Examples<\/h2>\n\n\n\n<p><\/p>\n\n\n\n<h3>1. Recurrent Neural Network (RNN)<\/h3>\n\n\n\n<p>Recurrent Neural Networks (RNNs) are a type of neural network used for sequential data tasks such as natural language processing and time-series analysis. 
Here&#8217;s an example of building a simple RNN using Keras:<\/p>\n\n\n\n<pre class=\"wp-block-code has-green-color has-black-background-color has-text-color has-background\"><code>from keras.models import Sequential\r\nfrom keras.layers import SimpleRNN, Dense\r\n\r\n# Define the RNN architecture\r\nmodel = Sequential()\r\nmodel.add(SimpleRNN(32, input_shape=(None, 100)))\r\nmodel.add(Dense(1, activation='sigmoid'))\r\n\r\n# Compile the model\r\nmodel.compile(optimizer='adam',\r\n              loss='binary_crossentropy',\r\n              metrics=&#91;'accuracy'])\r<\/code><\/pre>\n\n\n\n<p>In this example, we&#8217;re using the Sequential API to build an RNN with a single SimpleRNN layer with 32 units. The input_shape parameter is set to (None, 100) to indicate that the input sequence can be of variable length, but each element in the sequence has 100 features. We then add a fully connected layer with a single unit and a sigmoid activation function, as we&#8217;re performing binary classification. We&#8217;re also using the Adam optimizer and binary cross-entropy loss function.<\/p>\n\n\n\n<h3>2. Dropout<\/h3>\n\n\n\n<p>Dropout is a technique used to prevent overfitting in neural networks. 
Here&#8217;s an example of using dropout with Keras:<\/p>\n\n\n\n<pre class=\"wp-block-code has-green-color has-black-background-color has-text-color has-background\"><code>from keras.models import Sequential\r\nfrom keras.layers import Dense, Dropout\r\nfrom keras.callbacks import EarlyStopping\r\nfrom sklearn.datasets import make_classification\r\nfrom sklearn.model_selection import train_test_split\r\n\r\n# Generate some random data for binary classification\r\nX, y = make_classification(n_samples=10000, n_features=20, n_informative=10,\r\n                           n_redundant=0, n_classes=2, random_state=42)\r\n\r\n# Split the data into training and validation sets\r\nX_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)\r\n\r\n# Define the model architecture with dropout\r\nmodel = Sequential()\r\nmodel.add(Dense(64, activation='relu', input_shape=(20,)))\r\nmodel.add(Dropout(0.5))\r\nmodel.add(Dense(1, activation='sigmoid'))\r\n\r\n# Compile the model\r\nmodel.compile(optimizer='adam',\r\n              loss='binary_crossentropy',\r\n              metrics=&#91;'accuracy'])\r\n\r\n# Stop training once the validation loss stops improving\r\nearly_stop = EarlyStopping(monitor='val_loss', patience=5,\r\n                           restore_best_weights=True)\r\n\r\n# Train the model with early stopping\r\nhistory = model.fit(X_train, y_train, epochs=50, batch_size=128,\r\n                    validation_data=(X_val, y_val),\r\n                    callbacks=&#91;early_stop])\r<\/code><\/pre>\n\n\n\n<p>In this example, we&#8217;re using the Sequential API to build a neural network with a single hidden layer of 64 units and a ReLU activation function. We then add a dropout layer with a dropout rate of 0.5, which randomly zeroes half of the hidden layer&#8217;s outputs at each training step so the network cannot rely too heavily on any single unit. Finally, we add an output layer with a single unit and a sigmoid activation function, as we&#8217;re performing binary classification. We compile with the Adam optimizer and binary cross-entropy loss, and train with an EarlyStopping callback so that training halts once the validation loss stops improving.<\/p>\n\n\n\n<h3>3. Batch Normalization<\/h3>\n\n\n\n<p>Batch normalization is a technique used to improve the training stability and speed of neural networks. 
Here&#8217;s an example of using batch normalization with Keras:<\/p>\n\n\n\n<pre class=\"wp-block-code has-green-color has-black-background-color has-text-color has-background\"><code>from keras.models import Sequential\r\nfrom keras.layers import Dense, BatchNormalization\r\nfrom keras.callbacks import EarlyStopping\r\nfrom sklearn.datasets import make_classification\r\nfrom sklearn.model_selection import train_test_split\r\n\r\n# Generate some random data for binary classification\r\nX, y = make_classification(n_samples=10000, n_features=20, n_informative=10,\r\n                           n_redundant=0, n_classes=2, random_state=42)\r\n\r\n# Split the data into training and validation sets\r\nX_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)\r\n\r\n# Define the model architecture with batch normalization\r\nmodel = Sequential()\r\nmodel.add(Dense(64, activation='relu', input_shape=(20,)))\r\nmodel.add(BatchNormalization())\r\nmodel.add(Dense(1, activation='sigmoid'))\r\n\r\n# Compile the model\r\nmodel.compile(optimizer='adam',\r\n              loss='binary_crossentropy',\r\n              metrics=&#91;'accuracy'])\r\n\r\n# Stop training once the validation loss stops improving\r\nearly_stop = EarlyStopping(monitor='val_loss', patience=5,\r\n                           restore_best_weights=True)\r\n\r\n# Train the model with early stopping\r\nhistory = model.fit(X_train, y_train, epochs=50, batch_size=128,\r\n                    validation_data=(X_val, y_val),\r\n                    callbacks=&#91;early_stop])\r<\/code><\/pre>\n\n\n\n<p>In this example, we&#8217;re using the Sequential API to build a neural network with a single hidden layer of 64 units and a ReLU activation function. We then add a Batch Normalization layer after the hidden layer, and finally add an output layer with a single unit and a sigmoid activation function, as we&#8217;re performing binary classification.<\/p>\n\n\n\n<p>Batch Normalization works by normalizing the output of the previous layer over each mini-batch, so that it has a mean of 0 and a standard deviation of 1, before applying a learned scale and shift. 
This has been shown to improve the convergence of the model during training, and can lead to better generalization and reduced overfitting.<\/p>\n\n\n\n<p>In the example above, the Batch Normalization layer normalizes the output of the hidden layer before passing it on to the output layer; we then compile and train the model using the same techniques as in the other examples.<\/p>\n\n\n\n<div style=\"height:100px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<figure class=\"wp-block-image size-large\"><a href=\"https:\/\/amalgamcs.com\/\"><img decoding=\"async\" loading=\"lazy\" width=\"1024\" height=\"378\" src=\"http:\/\/blog.amalgamcs.com\/wp-content\/uploads\/2023\/03\/Original-Logo-1024x378.png\" alt=\"AmalgamCS Logo\" class=\"wp-image-76\" srcset=\"https:\/\/blog.amalgamcs.com\/wp-content\/uploads\/2023\/03\/Original-Logo-1024x378.png 1024w, https:\/\/blog.amalgamcs.com\/wp-content\/uploads\/2023\/03\/Original-Logo-300x111.png 300w, https:\/\/blog.amalgamcs.com\/wp-content\/uploads\/2023\/03\/Original-Logo-768x284.png 768w, https:\/\/blog.amalgamcs.com\/wp-content\/uploads\/2023\/03\/Original-Logo-1536x567.png 1536w, https:\/\/blog.amalgamcs.com\/wp-content\/uploads\/2023\/03\/Original-Logo-2048x756.png 2048w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/a><figcaption class=\"wp-element-caption\"><a href=\"https:\/\/amalgamcs.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/amalgamcs.com\/<\/a><\/figcaption><\/figure>\n","protected":false},"excerpt":{"rendered":"<p>Keras is a high-level open-source neural network library written in Python. It is designed to be user-friendly, modular, and extensible, and can run on top of other popular machine learning frameworks such as TensorFlow, Theano, and CNTK. 
Keras provides a simple and intuitive API that allows users to quickly build [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[3],"tags":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v20.3 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Keras: Your Gateway to Neural Networks - AmalgamCS Tech Blog<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/blog.amalgamcs.com\/index.php\/2023\/03\/26\/keras-your-gateway-to-neural-networks\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Keras: Your Gateway to Neural Networks - AmalgamCS Tech Blog\" \/>\n<meta property=\"og:description\" content=\"Keras is a high-level open-source neural network library written in Python. It is designed to be user-friendly, modular, and extensible, and can run on top of other popular machine learning frameworks such as TensorFlow, Theano, and CNTK. Keras provides a simple and intuitive API that allows users to quickly build [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/blog.amalgamcs.com\/index.php\/2023\/03\/26\/keras-your-gateway-to-neural-networks\/\" \/>\n<meta property=\"og:site_name\" content=\"AmalgamCS Tech Blog\" \/>\n<meta property=\"article:published_time\" content=\"2023-03-26T18:22:47+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-03-26T18:41:17+00:00\" \/>\n<meta name=\"author\" content=\"Garrik Hoyt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Garrik Hoyt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"4 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/blog.amalgamcs.com\/index.php\/2023\/03\/26\/keras-your-gateway-to-neural-networks\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/blog.amalgamcs.com\/index.php\/2023\/03\/26\/keras-your-gateway-to-neural-networks\/\"},\"author\":{\"name\":\"Garrik Hoyt\",\"@id\":\"https:\/\/blog.amalgamcs.com\/#\/schema\/person\/97a98f183f3f756243c26dbed73f8922\"},\"headline\":\"Keras: Your Gateway to Neural Networks\",\"datePublished\":\"2023-03-26T18:22:47+00:00\",\"dateModified\":\"2023-03-26T18:41:17+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/blog.amalgamcs.com\/index.php\/2023\/03\/26\/keras-your-gateway-to-neural-networks\/\"},\"wordCount\":551,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/blog.amalgamcs.com\/#organization\"},\"articleSection\":[\"A.I.\/M.L.\/Data Science\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/blog.amalgamcs.com\/index.php\/2023\/03\/26\/keras-your-gateway-to-neural-networks\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/blog.amalgamcs.com\/index.php\/2023\/03\/26\/keras-your-gateway-to-neural-networks\/\",\"url\":\"https:\/\/blog.amalgamcs.com\/index.php\/2023\/03\/26\/keras-your-gateway-to-neural-networks\/\",\"name\":\"Keras: Your Gateway to Neural Networks - AmalgamCS Tech 
Blog\",\"isPartOf\":{\"@id\":\"https:\/\/blog.amalgamcs.com\/#website\"},\"datePublished\":\"2023-03-26T18:22:47+00:00\",\"dateModified\":\"2023-03-26T18:41:17+00:00\",\"breadcrumb\":{\"@id\":\"https:\/\/blog.amalgamcs.com\/index.php\/2023\/03\/26\/keras-your-gateway-to-neural-networks\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/blog.amalgamcs.com\/index.php\/2023\/03\/26\/keras-your-gateway-to-neural-networks\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/blog.amalgamcs.com\/index.php\/2023\/03\/26\/keras-your-gateway-to-neural-networks\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/blog.amalgamcs.com\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Keras: Your Gateway to Neural Networks\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/blog.amalgamcs.com\/#website\",\"url\":\"https:\/\/blog.amalgamcs.com\/\",\"name\":\"AmalgamCS Tech Blog\",\"description\":\"Curated information on the latest in tech\",\"publisher\":{\"@id\":\"https:\/\/blog.amalgamcs.com\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/blog.amalgamcs.com\/?s={search_term_string}\"},\"query-input\":\"required 
name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/blog.amalgamcs.com\/#organization\",\"name\":\"AmalgamCS\",\"url\":\"https:\/\/blog.amalgamcs.com\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/blog.amalgamcs.com\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/blog.amalgamcs.com\/wp-content\/uploads\/2023\/03\/cropped-cropped-Transparent-Logo.png\",\"contentUrl\":\"https:\/\/blog.amalgamcs.com\/wp-content\/uploads\/2023\/03\/cropped-cropped-Transparent-Logo.png\",\"width\":2493,\"height\":485,\"caption\":\"AmalgamCS\"},\"image\":{\"@id\":\"https:\/\/blog.amalgamcs.com\/#\/schema\/logo\/image\/\"}},{\"@type\":\"Person\",\"@id\":\"https:\/\/blog.amalgamcs.com\/#\/schema\/person\/97a98f183f3f756243c26dbed73f8922\",\"name\":\"Garrik Hoyt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/blog.amalgamcs.com\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/91f854d9f252604310ae9cef7d5ab86d?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/91f854d9f252604310ae9cef7d5ab86d?s=96&d=mm&r=g\",\"caption\":\"Garrik Hoyt\"},\"sameAs\":[\"http:\/\/blog.amalgamcs.com\"],\"url\":\"https:\/\/blog.amalgamcs.com\/index.php\/author\/amalgamdvlpmnt\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Keras: Your Gateway to Neural Networks - AmalgamCS Tech Blog","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/blog.amalgamcs.com\/index.php\/2023\/03\/26\/keras-your-gateway-to-neural-networks\/","og_locale":"en_US","og_type":"article","og_title":"Keras: Your Gateway to Neural Networks - AmalgamCS Tech Blog","og_description":"Keras is a high-level open-source neural network library written in Python. 
It is designed to be user-friendly, modular, and extensible, and can run on top of other popular machine learning frameworks such as TensorFlow, Theano, and CNTK. Keras provides a simple and intuitive API that allows users to quickly build [&hellip;]","og_url":"https:\/\/blog.amalgamcs.com\/index.php\/2023\/03\/26\/keras-your-gateway-to-neural-networks\/","og_site_name":"AmalgamCS Tech Blog","article_published_time":"2023-03-26T18:22:47+00:00","article_modified_time":"2023-03-26T18:41:17+00:00","author":"Garrik Hoyt","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Garrik Hoyt","Est. reading time":"4 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/blog.amalgamcs.com\/index.php\/2023\/03\/26\/keras-your-gateway-to-neural-networks\/#article","isPartOf":{"@id":"https:\/\/blog.amalgamcs.com\/index.php\/2023\/03\/26\/keras-your-gateway-to-neural-networks\/"},"author":{"name":"Garrik Hoyt","@id":"https:\/\/blog.amalgamcs.com\/#\/schema\/person\/97a98f183f3f756243c26dbed73f8922"},"headline":"Keras: Your Gateway to Neural Networks","datePublished":"2023-03-26T18:22:47+00:00","dateModified":"2023-03-26T18:41:17+00:00","mainEntityOfPage":{"@id":"https:\/\/blog.amalgamcs.com\/index.php\/2023\/03\/26\/keras-your-gateway-to-neural-networks\/"},"wordCount":551,"commentCount":0,"publisher":{"@id":"https:\/\/blog.amalgamcs.com\/#organization"},"articleSection":["A.I.\/M.L.\/Data Science"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/blog.amalgamcs.com\/index.php\/2023\/03\/26\/keras-your-gateway-to-neural-networks\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/blog.amalgamcs.com\/index.php\/2023\/03\/26\/keras-your-gateway-to-neural-networks\/","url":"https:\/\/blog.amalgamcs.com\/index.php\/2023\/03\/26\/keras-your-gateway-to-neural-networks\/","name":"Keras: Your Gateway to Neural Networks - AmalgamCS Tech 
Blog","isPartOf":{"@id":"https:\/\/blog.amalgamcs.com\/#website"},"datePublished":"2023-03-26T18:22:47+00:00","dateModified":"2023-03-26T18:41:17+00:00","breadcrumb":{"@id":"https:\/\/blog.amalgamcs.com\/index.php\/2023\/03\/26\/keras-your-gateway-to-neural-networks\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/blog.amalgamcs.com\/index.php\/2023\/03\/26\/keras-your-gateway-to-neural-networks\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/blog.amalgamcs.com\/index.php\/2023\/03\/26\/keras-your-gateway-to-neural-networks\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/blog.amalgamcs.com\/"},{"@type":"ListItem","position":2,"name":"Keras: Your Gateway to Neural Networks"}]},{"@type":"WebSite","@id":"https:\/\/blog.amalgamcs.com\/#website","url":"https:\/\/blog.amalgamcs.com\/","name":"AmalgamCS Tech Blog","description":"Curated information on the latest in tech","publisher":{"@id":"https:\/\/blog.amalgamcs.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/blog.amalgamcs.com\/?s={search_term_string}"},"query-input":"required name=search_term_string"}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/blog.amalgamcs.com\/#organization","name":"AmalgamCS","url":"https:\/\/blog.amalgamcs.com\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/blog.amalgamcs.com\/#\/schema\/logo\/image\/","url":"https:\/\/blog.amalgamcs.com\/wp-content\/uploads\/2023\/03\/cropped-cropped-Transparent-Logo.png","contentUrl":"https:\/\/blog.amalgamcs.com\/wp-content\/uploads\/2023\/03\/cropped-cropped-Transparent-Logo.png","width":2493,"height":485,"caption":"AmalgamCS"},"image":{"@id":"https:\/\/blog.amalgamcs.com\/#\/schema\/logo\/image\/"}},{"@type":"Person","@id":"https:\/\/blog.amalgamcs.com\/#\/schema\/person\/97a98f183f3f756243c26dbed73f8922","name":"Garrik 
Hoyt","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/blog.amalgamcs.com\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/91f854d9f252604310ae9cef7d5ab86d?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/91f854d9f252604310ae9cef7d5ab86d?s=96&d=mm&r=g","caption":"Garrik Hoyt"},"sameAs":["http:\/\/blog.amalgamcs.com"],"url":"https:\/\/blog.amalgamcs.com\/index.php\/author\/amalgamdvlpmnt\/"}]}},"_links":{"self":[{"href":"https:\/\/blog.amalgamcs.com\/index.php\/wp-json\/wp\/v2\/posts\/169"}],"collection":[{"href":"https:\/\/blog.amalgamcs.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.amalgamcs.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.amalgamcs.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.amalgamcs.com\/index.php\/wp-json\/wp\/v2\/comments?post=169"}],"version-history":[{"count":3,"href":"https:\/\/blog.amalgamcs.com\/index.php\/wp-json\/wp\/v2\/posts\/169\/revisions"}],"predecessor-version":[{"id":172,"href":"https:\/\/blog.amalgamcs.com\/index.php\/wp-json\/wp\/v2\/posts\/169\/revisions\/172"}],"wp:attachment":[{"href":"https:\/\/blog.amalgamcs.com\/index.php\/wp-json\/wp\/v2\/media?parent=169"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.amalgamcs.com\/index.php\/wp-json\/wp\/v2\/categories?post=169"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.amalgamcs.com\/index.php\/wp-json\/wp\/v2\/tags?post=169"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}