
First Steps with LangChain and the ChatGPT API — Hands-on insights

You can create a simple tool to interact with the ChatGPT API via the command line. Here is how.

We will write a simple Python script which reads a question from the command line, connects to the ChatGPT API using LangChain, retrieves an answer and then stores the result of the interaction in a simple text file.

The generated text file captures the conversation and looks like this:

Langchain Session at 2023-06-02T122046 with gpt-3.5-turbo

2023-06-02T122147:
Q: Which are the most commonly used algorithms in computer science?
A: There are many algorithms used in computer science, but some of the most commonly used ones are:

1. Sorting algorithms: These are used to sort data in a specific order, such as alphabetically or numerically.

2. Search algorithms: These are used to find specific data within a larger set of data.

3. Graph algorithms: These are used to analyze and manipulate graphs, which are used to represent relationships between objects.

4. Dynamic programming algorithms: These are used to solve complex problems by breaking them down into smaller, more manageable subproblems.

5. Divide and conquer algorithms: These are used to solve problems by breaking them down into smaller, more manageable parts.

6. Greedy algorithms: These are used to solve optimization problems by making the best possible choice at each step.

7. Backtracking algorithms: These are used to solve problems by trying out different solutions and backtracking when a solution is found to be incorrect.

8. Randomized algorithms: These are used to solve problems by using randomization to find a solution.

2023-06-02T122247:
Q: Which are the most popular sorting algorithms in computer science?
A: The most popular sorting algorithms in computer science are:

1. Bubble Sort
2. Selection Sort
3. Insertion Sort
4. Merge Sort
5. Quick Sort
6. Heap Sort
7. Radix Sort
8. Counting Sort
9. Bucket Sort
10. Shell Sort

These algorithms are widely used in various applications and are taught in most computer science courses.

2023-06-02T122428:
Q: Can you show me an implementation of heap sort written in Rust?
A: Sure, here's an implementation of heap sort in Rust:

```rust
fn heap_sort(arr: &mut [i32]) {
    let len = arr.len();
    for i in (0..len / 2).rev() {
        heapify(arr, len, i);
    }
    for i in (1..len).rev() {
        arr.swap(0, i);
        heapify(arr, i, 0);
    }
}

fn heapify(arr: &mut [i32], n: usize, i: usize) {
    let mut largest = i;
    let left = 2 * i + 1;
    let right = 2 * i + 2;
    if left < n && arr[left] > arr[largest] {
        largest = left;
    }
    if right < n && arr[right] > arr[largest] {
        largest = right;
    }
    if largest != i {
        arr.swap(i, largest);
        heapify(arr, n, largest);
    }
}

fn main() {
    let mut arr = [5, 2, 9, 1, 5, 6];
    heap_sort(&mut arr);
    println!("{:?}", arr);
}
```

In this implementation, `heap_sort` takes a mutable reference to an array of `i32` values and sorts it using the heap sort algorithm. The `heapify` function is a helper function that recursively builds a max heap from the given array. The `main` function demonstrates how to use the `heap_sort` function to sort an array.
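Since the rest of this post works in Python, here is a rough Python equivalent of the same heap sort, useful for comparing the two languages (a sketch, not part of the original chat session):

```python
# Python sketch of the recursive heap sort shown above in Rust.
def heapify(arr, n, i):
    # Sift the element at index i down until the subtree rooted at i
    # satisfies the max-heap property.
    largest = i
    left, right = 2 * i + 1, 2 * i + 2
    if left < n and arr[left] > arr[largest]:
        largest = left
    if right < n and arr[right] > arr[largest]:
        largest = right
    if largest != i:
        arr[i], arr[largest] = arr[largest], arr[i]
        heapify(arr, n, largest)

def heap_sort(arr):
    n = len(arr)
    # Build a max heap, then repeatedly move the maximum to the end.
    for i in range(n // 2 - 1, -1, -1):
        heapify(arr, n, i)
    for i in range(n - 1, 0, -1):
        arr[0], arr[i] = arr[i], arr[0]
        heapify(arr, i, 0)

nums = [5, 2, 9, 1, 5, 6]
heap_sort(nums)
print(nums)  # [1, 2, 5, 5, 6, 9]
```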

Prerequisites

Conda, a Python package manager, was used to create a dedicated environment in which the LangChain and OpenAI packages were installed.

In order to create an 环境 with Conda we executed this command:

conda create --name langchain python=3.10

Here are the commands used to install the packages:

conda install -c conda-forge openai
conda install -c conda-forge langchain

If you want to import the environment definitions, here is a definition export file of the environment I created with Conda:


You will also need to generate an API key on the OpenAI website: http://platform.openai.com/account/api-keys

I ended up using a paid account to be able to use the API. You can find more information about payment options here:

OpenAI API: An API for accessing new AI models developed by OpenAI (platform.openai.com)


The Python Script

The script starts off by importing some standard Python libraries and then the LangChain packages which we need for our chat:

import sys
import os
import re
from pathlib import Path
from datetime import datetime

# LangChain
from langchain.chat_models import ChatOpenAI
from langchain.schema import (
    HumanMessage
)

Then we initialize some configuration parameters, including the OpenAI key and the model we would like to use.

# Configuration
# Put your API key here
os.environ["OPENAI_API_KEY"] = ''

# Put your model here
# Another possible chat option is 'gpt-3.5-turbo-0301'.
model_name = "gpt-3.5-turbo"

Other models should be available soon. I tried gpt-4 but that threw an error as I have no access to the public API Beta. Here is some additional information on other models.

After this you just create the main object used to interact with the REST APIs of the model you want to use:

# Initialize the chat object.
chat = ChatOpenAI(model_name=model_name, temperature=0)

Here we initialize the chat object using the selected model and a temperature of 0, which is used to reduce the randomness of the responses. The default value of temperature is 1. See more information about this parameter here.
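To build some intuition for what temperature does, here is an illustrative softmax sketch (this is not the OpenAI implementation, just the standard way temperature rescales a probability distribution; a temperature of 0 corresponds to the limit of always picking the highest-scoring token):

```python
import math

# Illustration of temperature scaling: scores are divided by the
# temperature before softmax, so low temperatures sharpen the
# distribution and high temperatures flatten it.
def softmax_with_temperature(scores, temperature):
    scaled = [s / temperature for s in scores]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

scores = [2.0, 1.0, 0.5]
print(softmax_with_temperature(scores, 0.1))  # nearly all mass on the top score
print(softmax_with_temperature(scores, 2.0))  # much closer to uniform
```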

The script contains a method used to generate the current date:

def generate_iso_date():
    current_date = datetime.now()
    return re.sub(r"\.\d+$", "", current_date.isoformat().replace(':', ''))
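Applied to a fixed timestamp, the transformation is easier to see: the colons are stripped from the time part and the fractional seconds are removed (a sketch using the same regex as above):

```python
import re
from datetime import datetime

# Same transformation as generate_iso_date(), but on a fixed timestamp
# so the result is deterministic.
fixed = datetime(2023, 6, 2, 12, 20, 46, 123456)
stamp = re.sub(r"\.\d+$", "", fixed.isoformat().replace(':', ''))
print(stamp)  # 2023-06-02T122046
```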

And a class which is used to capture the content of the chat:

class ChatFile:
    def __init__(self, current_file: Path, model_name: str) -> None:
        self.current_file = current_file
        self.model_name = model_name
        print(f"Writing to file {current_file}")
        with open(self.current_file, 'w') as f:
            f.write(f"Langchain Session at {generate_iso_date()} with {self.model_name}\n\n")

    def store_to_file(self, question: str, answer: str):
        print(f"{answer}")
        with open(self.current_file, 'a') as f:
            f.write(f"{generate_iso_date()}:\nQ: {question}\nA: {answer}\n\n")

# Create the chat file
chat_file = ChatFile(Path(f"{model_name}_{generate_iso_date()}.txt"), model_name)
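To see the resulting file layout without running the whole script, the same write pattern can be exercised against a temporary file (a simplified stand-in for ChatFile, with a made-up question and answer):

```python
import tempfile
from pathlib import Path

# Simplified stand-in reproducing ChatFile's write pattern: a session
# header written once, then one Q/A record appended per interaction.
with tempfile.TemporaryDirectory() as tmp:
    current_file = Path(tmp) / "gpt-3.5-turbo_2023-06-02T122046.txt"
    with open(current_file, 'w') as f:
        f.write("Langchain Session at 2023-06-02T122046 with gpt-3.5-turbo\n\n")
    with open(current_file, 'a') as f:
        f.write("2023-06-02T122147:\nQ: What is heap sort?\nA: A comparison sort.\n\n")
    text = current_file.read_text()
print(text.splitlines()[0])  # Langchain Session at 2023-06-02T122046 with gpt-3.5-turbo
```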

And then a simple loop reads the user input, sends it to the ChatGPT model, receives the answer, displays it on the console and saves it to a local file.

for line in sys.stdin:
    print(f"[{model_name}]", end=">> ")
    question = line.strip()
    if 'q' == question:
        break
    # The LLM takes a prompt as an input and outputs a completion
    resp = chat([HumanMessage(content=question)])
    answer = resp.content
    chat_file.store_to_file(question, answer)
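The loop's read/answer/store flow can be exercised without an API key by stubbing the chat call (FakeResponse and fake_chat below are hypothetical stand-ins, not LangChain classes):

```python
# Hypothetical stub standing in for the ChatOpenAI call so that the
# question/answer loop can be tested offline.
class FakeResponse:
    def __init__(self, content):
        self.content = content

def fake_chat(messages):
    return FakeResponse(f"Echo: {messages[0]}")

transcript = []
for line in ["What is heap sort?\n", "q\n"]:
    question = line.strip()
    if 'q' == question:
        break  # same exit condition as the real loop
    resp = fake_chat([question])
    transcript.append((question, resp.content))
print(transcript)  # [('What is heap sort?', 'Echo: What is heap sort?')]
```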



Conclusion

LangChain offers an easy integration with ChatGPT which you can use via a simple script like the one shown above. You just need to have an OpenAI key and in most cases a paid OpenAI account.

The ChatGPT 4 models are not yet accessible to the wider public, so for now most people will have to stick with the ChatGPT 3.5 models.


Gil Fernandes, Onepoint Consulting

