{"id":252,"date":"2024-05-08T17:22:24","date_gmt":"2024-05-08T21:22:24","guid":{"rendered":"https:\/\/pressbooks.library.torontomu.ca\/graduatereviews3\/?post_type=chapter&#038;p=252"},"modified":"2024-09-11T10:36:58","modified_gmt":"2024-09-11T14:36:58","slug":"generative-ai-reviews","status":"publish","type":"chapter","link":"https:\/\/pressbooks.library.torontomu.ca\/graduatereviews3\/chapter\/generative-ai-reviews\/","title":{"raw":"Generative AI &amp; Reviews","rendered":"Generative AI &amp; Reviews"},"content":{"raw":"The topic of [pb_glossary id=\"253\"]generative AI (GAI)[\/pb_glossary] is fairly new and is evolving quickly. Although artificial intelligence more broadly has been around for decades, discussions about generative AI became widespread with the introduction of ChatGPT in 2022. Since then, there has been a dramatic increase in the creation of AI-assisted tools, which can be used to expedite tasks during the review process.\r\n\r\nThese tools can be used (with varying quality) to: summarize literature, extract information like themes, find sources on a topic, show relationships between works, and \u2018synthesize\u2019 findings (or so they claim). However, there are many issues with these tools and they should never be used without human intervention. When considering AI tools, ensure that you have permission (some journals do not allow AI authorship, although this is subject to change), double check the accuracy of the output, and always cite the tool and how it has been used.\r\n<h1>Issues with Generative AI<\/h1>\r\nAs with any tool, generative AI is not perfect. 
There are many issues and limitations, including:\r\n<ul>\r\n \t<li style=\"font-weight: 400\">Inaccuracy and errors<\/li>\r\n \t<li style=\"font-weight: 400\">Bias and discrimination<\/li>\r\n \t<li style=\"font-weight: 400\">Risks to privacy<\/li>\r\n<\/ul>\r\n<h2>Inaccuracy and errors<\/h2>\r\nThe content that is generated by [pb_glossary id=\"255\"]Large Language Models[\/pb_glossary] (LLMs) can be inaccurate or false. LLMs have been known to \u2018hallucinate\u2019, giving responses that are false or imaginative (IBM, n.d.). LLMs have also been trained with data up to a certain point in time and cannot account for new information past that cut-off date. For instance, ChatGPT 3.5 does not currently have knowledge of events that occurred after 2021 and can, therefore, produce outdated results (OpenAI, 2024). It\u2019s important to always verify any information that is generated by AI.\r\n<h2>Bias and discrimination<\/h2>\r\nLLMs have been created and trained by humans and are therefore not free from bias. Bias can appear in multiple stages, including data collection, data labeling, model training, and deployment (Chapman University, n.d.). LLMs can perpetuate stereotypes and cause discrimination; reinforce exclusion by perpetuating social norms; use harmful language; and can perform better for certain languages and social groups over others (Weidinger et al., 2021).\r\n<h2>Risks to privacy<\/h2>\r\nTraining data can include personal and private information, which can be revealed by an LLM as a response to a prompt, creating a privacy leak as with Scatterlab\u2019s chatbot Lee-Luda (Weidinger et al., 2021). As LLMs can also include the information you have provided as part of their training data, that information is also at risk. It\u2019s important to treat all conversations with generative AI models as public, since you cannot control how the information is used once it has been added to a model\u2019s input. 
Never share sensitive information.\r\n\r\nThese are just three concerns with generative AI, but there are many others, some of which have been mentioned in the <a href=\"https:\/\/learn.library.torontomu.ca\/artificialintelligence\/ai_issues\">Generative AI Guide<\/a>.\r\n<h1>AI in Literature &amp; Systematic Reviews<\/h1>\r\nThe use of generative AI in academic reviews is a new area of research that is generating interest. Below are some early benefits, cautions, and recommendations, drawn from van Dijk et al. (2023) unless otherwise stated.\r\n\r\nA benefit of using an AI tool is that it can save you time compared to other tools and methods.\r\n\r\nHowever, we must also consider that:\r\n<ul>\r\n \t<li style=\"font-weight: 400\">GAI tools miss some relevant articles for systematic reviews<\/li>\r\n \t<li style=\"font-weight: 400\">Deduplication of articles is required, as GAI deduplication is not always accurate<\/li>\r\n<\/ul>\r\nRecommendations:\r\n<ul>\r\n \t<li style=\"font-weight: 400\">Be transparent about your use of GAI in reviews: note it prominently in your methodology or equivalent section, and cite the tool(s) you use (See <a href=\"#citations\">Citations<\/a> for more information)<\/li>\r\n \t<li style=\"font-weight: 400\">Further research and guidelines are needed to ensure quality standards are met using GAI (Cacciamani et al., 2023).<\/li>\r\n<\/ul>\r\nTools and recommendations are rapidly changing; for current information, please see this guide, which <a href=\"https:\/\/learn.library.torontomu.ca\/artificialintelligence\/ai-reviews\">includes GAI tools for academic reviews, current research on AI screening of academic reviews<\/a>, and more; it is updated regularly by the authors of this book.\r\n<h1><a id=\"citations\"><\/a>Citations<\/h1>\r\nAs mentioned in the section above, transparency is vital if you are using GAI tools in your research, screening, and\/or writing. Citation is part of this. 
GAI use in research is so new that style guides may not have current guidance, but many styles, such as APA, MLA, and Chicago, have updated recommendations on their websites. For the most current recommendations, please see <a href=\"https:\/\/learn.library.torontomu.ca\/artificialintelligence\/ai-citations\">Citing Artificial Intelligence<\/a>, updated regularly by the authors of this book.\r\n<h1>AI Prompts<\/h1>\r\nTo interact with a generative AI tool, you will need to give it a [pb_glossary id=\"254\"]prompt[\/pb_glossary]. Prompts are written in natural language, which can feel counterintuitive if you are used to keyword searching in databases. Typically, the more information you provide, the better.\r\n\r\nProviding generative AI tools with an effective prompt is an important part of getting the desired response. The following prompt guidance has been adapted from Harvard University Information Technology (2023).\r\n\r\n<strong>Be clear<\/strong>\r\n\r\nTell a GAI tool exactly what you would like it to do, as well as what you would like it not to do. Use the words \u2018do\u2019 and \u2018don\u2019t\u2019 to clarify your criteria. It also helps to be clear about how you would like to receive the output: maybe you want a list, a couple of paragraphs, or the format of a letter. You can also use examples to provide further clarification, but be careful not to use copyrighted works as an example.\r\n\r\n<strong>Be specific<\/strong>\r\n\r\nAsking a GAI tool to do something generic like \u2018Write a speech\u2019 will produce an equally generic result. To improve this prompt, provide context and background information on your request. You can specify the tone of the response and the audience it is intended for. 
Typically, the more specific your request, the better your result, with the caveat that sometimes being too specific may cause a hallucination as the AI model tries to fill in the blanks of what it does not know.\r\n\r\n<strong>Have a Conversation<\/strong>\r\n\r\nPart of the appeal of generative AI is that you can speak to it as you would another person and build on your request. If you do not get the results you want or would like the results to be modified, you can provide this feedback to the model. For instance, you could ask a chatbot to summarize its answer in one paragraph, speak more about a specific aspect of its answer, or write in more formal language. If you\u2019re stuck on a prompt or would like to improve a prompt, you can also ask the chatbot what it needs from you to fulfill this request. This is part of a process of refining your prompts through trial and error.\r\n<h1>Definitions<\/h1>\r\n<strong>Deep learning<\/strong> is what makes generative AI and LLMs possible. It \u201cuses neural networks with multiple layers to model and solve complex problems\u201d (University of Manitoba Libraries, 2024).\r\n\r\n<strong>Generative artificial intelligence<\/strong> is a broad term that encompasses AI systems that generate content. These systems are trained on large amounts of data to produce a response to a user\u2019s prompt. They continually learn and improve on themselves. E.g., text generators like ChatGPT, image generators like Midjourney, and audio and video generators like Canva AI.\r\n\r\n<strong>Large Language Model (LLM)<\/strong> is a language model that uses deep learning and large training datasets to recognize, classify, create, predict, and summarize textual content. E.g., OpenAI\u2019s GPT-4.\r\n\r\n<strong>Neural Networks<\/strong> process and analyze data for AI, using algorithms to identify patterns and relationships. 
They are intended to imitate the operation of a human brain (Chen, 2024).\r\n\r\n<strong>Prompts<\/strong> are the information entered into a GAI tool in order to receive an output. GAI analyzes the prompt, and generates a response based on relationships identified by its neural networks (Harvard University Information Technology, 2023).\r\n\r\n<hr \/>\r\n\r\nCacciamani, G. E., Chu, T. N., Sanford, D. I., Abreu, A., Duddalwar, V., Oberai, A., Kuo, C.-C. J., Liu, X., Denniston, A. K., Vasey, B., McCulloch, P., Wolff, R. F., Mallett, S., Mongan, J., Kahn, C. E., Jr, Sounderajah, V., Darzi, A., Dahm, P., Moons, K. G. M., \u2026 Hung, A. J. (2023). PRISMA AI reporting guidelines for systematic reviews and meta-analyses on AI in healthcare. <em>Nature Medicine, 29<\/em>(1), 14\u201315.\r\n\r\nChapman University. (n.d.). <a href=\"https:\/\/www.chapman.edu\/ai\/bias-in-ai.aspx\"><em>Bias in AI<\/em><\/a>.\r\n\r\nChen, J. (2024, Feb. 7). <a href=\"https:\/\/www.investopedia.com\/terms\/n\/neuralnetwork.asp\">What Is a Neural Network?<\/a> <em>Investopedia<\/em>.\r\n\r\nHarvard University Information Technology (2023). <a href=\"https:\/\/huit.harvard.edu\/news\/ai-prompts\"><em>Getting started with prompts for text-based Generative AI tools<\/em><\/a>.\r\n\r\nIBM. (n.d.). <a href=\"https:\/\/www.ibm.com\/topics\/ai-hallucinations\"><em>What are AI Hallucinations?<\/em><\/a> Think.\r\n\r\nOpenAI. (2024). <a href=\"https:\/\/help.openai.com\/en\/articles\/6783457-what-is-chatgpt\"><em>What is ChatGPT?<\/em><\/a>\r\n\r\nUniversity of Manitoba Libraries. (2024).\u00a0<em><a href=\"https:\/\/libguides.lib.umanitoba.ca\/AIforResearch\">Definitions<\/a>.<\/em> Using Generative AI for Library Research.\r\n\r\nvan Dijk, S. H. B., Brusse-Keizer, M. G. J., Bucs\u00e1n, C. C., van der Palen, J., Doggen, C. J. M., &amp; Lenferink, A. (2023). Artificial intelligence in systematic reviews: promising when appropriately used. 
<em>BMJ Open, 13<\/em>(7), e072254\u2013e072254.\r\n\r\nWeidinger, L., Mellor, J., Rauh, M., Griffin, C., Uesato, J., Huang, P.-S., Cheng, M., Glaese, M., Balle, B., Kasirzadeh, A., Kenton, Z., Brown, S., Hawkins, W., Stepleton, T., Biles, C., Birhane, A., Haas, J., Rimell, L., Hendricks, L. A., \u2026 Gabriel, I. (2021). <a href=\"http:\/\/arxiv.org\/abs\/2112.04359\">Ethical and social risks of harm from Language Models<\/a> (arXiv:2112.04359). <em>arXiv.<\/em>","rendered":"<p>The topic of <button class=\"glossary-term\" aria-describedby=\"252-253\">generative AI (GAI)<\/button> is fairly new and is evolving quickly. Although artificial intelligence more broadly has been around for decades, discussions about generative AI became widespread with the introduction of ChatGPT in 2022. Since then, there has been a dramatic increase in the creation of AI-assisted tools, which can be used to expedite tasks during the review process.<\/p>\n<p>These tools can be used (with varying quality) to: summarize literature, extract information like themes, find sources on a topic, show relationships between works, and \u2018synthesize\u2019 findings (or so they claim). However, there are many issues with these tools and they should never be used without human intervention. When considering AI tools, ensure that you have permission (some journals do not allow AI authorship, although this is subject to change), double check the accuracy of the output, and always cite the tool and how it has been used.<\/p>\n<h1>Issues with Generative AI<\/h1>\n<p>As with any tool, generative AI is not perfect. 
There are many issues and limitations, including:<\/p>\n<ul>\n<li style=\"font-weight: 400\">Inaccuracy and errors<\/li>\n<li style=\"font-weight: 400\">Bias and discrimination<\/li>\n<li style=\"font-weight: 400\">Risks to privacy<\/li>\n<\/ul>\n<h2>Inaccuracy and errors<\/h2>\n<p>The content that is generated by <button class=\"glossary-term\" aria-describedby=\"252-255\">Large Language Models<\/button> (LLMs) can be inaccurate or false. LLMs have been known to \u2018hallucinate\u2019, giving responses that are false or imaginative (IBM, n.d.). LLMs have also been trained with data up to a certain point in time and cannot account for new information past that cut-off date. For instance, ChatGPT 3.5 does not currently have knowledge of events that occurred after 2021 and can, therefore, produce outdated results (OpenAI, 2024). It\u2019s important to always verify any information that is generated by AI.<\/p>\n<h2>Bias and discrimination<\/h2>\n<p>LLMs have been created and trained by humans and are therefore not free from bias. Bias can appear in multiple stages, including data collection, data labeling, model training, and deployment (Chapman University, n.d.). LLMs can perpetuate stereotypes and cause discrimination; reinforce exclusion by perpetuating social norms; use harmful language; and can perform better for certain languages and social groups over others (Weidinger et al., 2021).<\/p>\n<h2>Risks to privacy<\/h2>\n<p>Training data can include personal and private information, which can be revealed by an LLM as a response to a prompt, creating a privacy leak as with Scatterlab\u2019s chatbot Lee-Luda (Weidinger et al., 2021). As LLMs can also include the information you have provided as part of their training data, that information is also at risk. It\u2019s important to treat all conversations with generative AI models as public, since you cannot control how the information is used once it has been added to a model\u2019s input. 
Never share sensitive information.<\/p>\n<p>These are just three concerns with generative AI, but there are many others, some of which have been mentioned in the <a href=\"https:\/\/learn.library.torontomu.ca\/artificialintelligence\/ai_issues\">Generative AI Guide<\/a>.<\/p>\n<h1>AI in Literature &amp; Systematic Reviews<\/h1>\n<p>The use of generative AI in academic reviews is a new area of research that is generating interest. Below are some early benefits, cautions, and recommendations, drawn from van Dijk et al. (2023) unless otherwise stated.<\/p>\n<p>A benefit of using an AI tool is that it can save you time compared to other tools and methods.<\/p>\n<p>However, we must also consider that:<\/p>\n<ul>\n<li style=\"font-weight: 400\">GAI tools miss some relevant articles for systematic reviews<\/li>\n<li style=\"font-weight: 400\">Deduplication of articles is required, as GAI deduplication is not always accurate<\/li>\n<\/ul>\n<p>Recommendations:<\/p>\n<ul>\n<li style=\"font-weight: 400\">Be transparent about your use of GAI in reviews: note it prominently in your methodology or equivalent section, and cite the tool(s) you use (See <a href=\"#citations\">Citations<\/a> for more information)<\/li>\n<li style=\"font-weight: 400\">Further research and guidelines are needed to ensure quality standards are met using GAI (Cacciamani et al., 2023).<\/li>\n<\/ul>\n<p>Tools and recommendations are rapidly changing; for current information, please see this guide, which <a href=\"https:\/\/learn.library.torontomu.ca\/artificialintelligence\/ai-reviews\">includes GAI tools for academic reviews, current research on AI screening of academic reviews<\/a>, and more; it is updated regularly by the authors of this book.<\/p>\n<h1><a id=\"citations\"><\/a>Citations<\/h1>\n<p>As mentioned in the section above, transparency is vital if you are using GAI tools in your research, screening, and\/or writing. Citation is part of this. 
GAI use in research is so new that style guides may not have current guidance, but many styles, such as APA, MLA, and Chicago, have updated recommendations on their websites. For the most current recommendations, please see <a href=\"https:\/\/learn.library.torontomu.ca\/artificialintelligence\/ai-citations\">Citing Artificial Intelligence<\/a>, updated regularly by the authors of this book.<\/p>\n<h1>AI Prompts<\/h1>\n<p>To interact with a generative AI tool, you will need to give it a <button class=\"glossary-term\" aria-describedby=\"252-254\">prompt<\/button>. Prompts are written in natural language, which can feel counterintuitive if you are used to keyword searching in databases. Typically, the more information you provide, the better.<\/p>\n<p>Providing generative AI tools with an effective prompt is an important part of getting the desired response. The following prompt guidance has been adapted from Harvard University Information Technology (2023).<\/p>\n<p><strong>Be clear<\/strong><\/p>\n<p>Tell a GAI tool exactly what you would like it to do, as well as what you would like it not to do. Use the words \u2018do\u2019 and \u2018don\u2019t\u2019 to clarify your criteria. It also helps to be clear about how you would like to receive the output: maybe you want a list, a couple of paragraphs, or the format of a letter. You can also use examples to provide further clarification, but be careful not to use copyrighted works as an example.<\/p>\n<p><strong>Be specific<\/strong><\/p>\n<p>Asking a GAI tool to do something generic like \u2018Write a speech\u2019 will produce an equally generic result. To improve this prompt, provide context and background information on your request. You can specify the tone of the response and the audience it is intended for. 
Typically, the more specific your request, the better your result, with the caveat that sometimes being too specific may cause a hallucination as the AI model tries to fill in the blanks of what it does not know.<\/p>\n<p><strong>Have a Conversation<\/strong><\/p>\n<p>Part of the appeal of generative AI is that you can speak to it as you would another person and build on your request. If you do not get the results you want or would like the results to be modified, you can provide this feedback to the model. For instance, you could ask a chatbot to summarize its answer in one paragraph, speak more about a specific aspect of its answer, or write in more formal language. If you\u2019re stuck on a prompt or would like to improve a prompt, you can also ask the chatbot what it needs from you to fulfill this request. This is part of a process of refining your prompts through trial and error.<\/p>\n<h1>Definitions<\/h1>\n<p><strong>Deep learning<\/strong> is what makes generative AI and LLMs possible. It \u201cuses neural networks with multiple layers to model and solve complex problems\u201d (University of Manitoba Libraries, 2024).<\/p>\n<p><strong>Generative artificial intelligence<\/strong> is a broad term that encompasses AI systems that generate content. These systems are trained on large amounts of data to produce a response to a user\u2019s prompt. They continually learn and improve on themselves. E.g., text generators like ChatGPT, image generators like Midjourney, and audio and video generators like Canva AI.<\/p>\n<p><strong>Large Language Model (LLM)<\/strong> is a language model that uses deep learning and large training datasets to recognize, classify, create, predict, and summarize textual content. E.g., OpenAI\u2019s GPT-4.<\/p>\n<p><strong>Neural Networks<\/strong> process and analyze data for AI, using algorithms to identify patterns and relationships. 
They are intended to imitate the operation of a human brain (Chen, 2024).<\/p>\n<p><strong>Prompts<\/strong> are the information entered into a GAI tool in order to receive an output. GAI analyzes the prompt, and generates a response based on relationships identified by its neural networks (Harvard University Information Technology, 2023).<\/p>\n<hr \/>\n<p>Cacciamani, G. E., Chu, T. N., Sanford, D. I., Abreu, A., Duddalwar, V., Oberai, A., Kuo, C.-C. J., Liu, X., Denniston, A. K., Vasey, B., McCulloch, P., Wolff, R. F., Mallett, S., Mongan, J., Kahn, C. E., Jr, Sounderajah, V., Darzi, A., Dahm, P., Moons, K. G. M., \u2026 Hung, A. J. (2023). PRISMA AI reporting guidelines for systematic reviews and meta-analyses on AI in healthcare. <em>Nature Medicine, 29<\/em>(1), 14\u201315.<\/p>\n<p>Chapman University. (n.d.). <a href=\"https:\/\/www.chapman.edu\/ai\/bias-in-ai.aspx\"><em>Bias in AI<\/em><\/a>.<\/p>\n<p>Chen, J. (2024, Feb. 7). <a href=\"https:\/\/www.investopedia.com\/terms\/n\/neuralnetwork.asp\">What Is a Neural Network?<\/a> <em>Investopedia<\/em>.<\/p>\n<p>Harvard University Information Technology (2023). <a href=\"https:\/\/huit.harvard.edu\/news\/ai-prompts\"><em>Getting started with prompts for text-based Generative AI tools<\/em><\/a>.<\/p>\n<p>IBM. (n.d.). <a href=\"https:\/\/www.ibm.com\/topics\/ai-hallucinations\"><em>What are AI Hallucinations?<\/em><\/a> Think.<\/p>\n<p>OpenAI. (2024). <a href=\"https:\/\/help.openai.com\/en\/articles\/6783457-what-is-chatgpt\"><em>What is ChatGPT?<\/em><\/a><\/p>\n<p>University of Manitoba Libraries. (2024).\u00a0<em><a href=\"https:\/\/libguides.lib.umanitoba.ca\/AIforResearch\">Definitions<\/a>.<\/em> Using Generative AI for Library Research.<\/p>\n<p>van Dijk, S. H. B., Brusse-Keizer, M. G. J., Bucs\u00e1n, C. C., van der Palen, J., Doggen, C. J. M., &amp; Lenferink, A. (2023). Artificial intelligence in systematic reviews: promising when appropriately used. 
<em>BMJ Open, 13<\/em>(7), e072254\u2013e072254.<\/p>\n<p>Weidinger, L., Mellor, J., Rauh, M., Griffin, C., Uesato, J., Huang, P.-S., Cheng, M., Glaese, M., Balle, B., Kasirzadeh, A., Kenton, Z., Brown, S., Hawkins, W., Stepleton, T., Biles, C., Birhane, A., Haas, J., Rimell, L., Hendricks, L. A., \u2026 Gabriel, I. (2021). <a href=\"http:\/\/arxiv.org\/abs\/2112.04359\">Ethical and social risks of harm from Language Models<\/a> (arXiv:2112.04359). <em>arXiv.<\/em><\/p>\n<div class=\"glossary\"><div class=\"glossary__tooltip\" id=\"252-253\" hidden><p>A broad term that encompasses AI systems that generate content. These systems are trained on large amounts of data to produce a response to a user\u2019s prompt.<\/p>\n<\/div><div class=\"glossary__tooltip\" id=\"252-255\" hidden><p>A language model that uses deep learning and large training datasets to recognize, classify, create, predict, and summarize textual content.<\/p>\n<\/div><div class=\"glossary__tooltip\" id=\"252-254\" hidden><p>The information entered into a GAI tool in order to receive an 
output.<\/p>\n<\/div><\/div>","protected":false},"author":519,"menu_order":12,"template":"","meta":{"pb_show_title":"on","pb_short_title":"","pb_subtitle":"","pb_authors":[],"pb_section_license":""},"chapter-type":[],"contributor":[],"license":[],"class_list":["post-252","chapter","type-chapter","status-publish","hentry"],"part":87,"_links":{"self":[{"href":"https:\/\/pressbooks.library.torontomu.ca\/graduatereviews3\/wp-json\/pressbooks\/v2\/chapters\/252","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/pressbooks.library.torontomu.ca\/graduatereviews3\/wp-json\/pressbooks\/v2\/chapters"}],"about":[{"href":"https:\/\/pressbooks.library.torontomu.ca\/graduatereviews3\/wp-json\/wp\/v2\/types\/chapter"}],"author":[{"embeddable":true,"href":"https:\/\/pressbooks.library.torontomu.ca\/graduatereviews3\/wp-json\/wp\/v2\/users\/519"}],"version-history":[{"count":10,"href":"https:\/\/pressbooks.library.torontomu.ca\/graduatereviews3\/wp-json\/pressbooks\/v2\/chapters\/252\/revisions"}],"predecessor-version":[{"id":506,"href":"https:\/\/pressbooks.library.torontomu.ca\/graduatereviews3\/wp-json\/pressbooks\/v2\/chapters\/252\/revisions\/506"}],"part":[{"href":"https:\/\/pressbooks.library.torontomu.ca\/graduatereviews3\/wp-json\/pressbooks\/v2\/parts\/87"}],"metadata":[{"href":"https:\/\/pressbooks.library.torontomu.ca\/graduatereviews3\/wp-json\/pressbooks\/v2\/chapters\/252\/metadata\/"}],"wp:attachment":[{"href":"https:\/\/pressbooks.library.torontomu.ca\/graduatereviews3\/wp-json\/wp\/v2\/media?parent=252"}],"wp:term":[{"taxonomy":"chapter-type","embeddable":true,"href":"https:\/\/pressbooks.library.torontomu.ca\/graduatereviews3\/wp-json\/pressbooks\/v2\/chapter-type?post=252"},{"taxonomy":"contributor","embeddable":true,"href":"https:\/\/pressbooks.library.torontomu.ca\/graduatereviews3\/wp-json\/wp\/v2\/contributor?post=252"},{"taxonomy":"license","embeddable":true,"href":"https:\/\/pressbooks.library.torontomu.ca\/graduatereviews3\/wp-json\/wp\/v2
\/license?post=252"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}