SageMaker V2 Examples
---------------------

#. `SageMaker Autopilot <src/sagemaker/automl/README.rst>`__
#. `Model Monitoring <https://sagemaker.readthedocs.io/en/stable/amazon_sagemaker_model_monitoring.html>`__
#. `SageMaker Debugger <https://sagemaker.readthedocs.io/en/stable/amazon_sagemaker_debugger.html>`__
#. `SageMaker Processing <https://sagemaker.readthedocs.io/en/stable/amazon_sagemaker_processing.html>`__

🚀 Model Fine-Tuning Support Now Available in V3
-------------------------------------------------

We're excited to announce model fine-tuning capabilities in SageMaker Python SDK V3!

**What's New**

Four new trainer classes for fine-tuning foundation models:

* ``SFTTrainer`` - Supervised fine-tuning (SFT)
* ``DPOTrainer`` - Direct preference optimization (see the sketch after the quick example below)
* ``RLAIFTrainer`` - Reinforcement learning from AI feedback (RLAIF)
* ``RLVRTrainer`` - Reinforcement learning from verifiable rewards (RLVR)

**Quick Example**

.. code:: python

    from sagemaker.train import SFTTrainer
    from sagemaker.train.common import TrainingType

    # Configure a LoRA-based supervised fine-tuning job
    trainer = SFTTrainer(
        model="meta-llama/Llama-2-7b-hf",
        training_type=TrainingType.LORA,
        model_package_group_name="my-models",
        training_dataset="s3://bucket/train.jsonl",
    )

    # Launch the training job
    training_job = trainer.train()

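The other trainers follow the same construct-then-``train()`` pattern. Below is a minimal, hypothetical sketch for ``DPOTrainer``: it assumes the constructor mirrors ``SFTTrainer`` above, and the preference-dataset path is a placeholder; check the V3 API reference for the exact parameter names.

.. code:: python

    from sagemaker.train import DPOTrainer
    from sagemaker.train.common import TrainingType

    # Hypothetical sketch: assumes DPOTrainer mirrors the SFTTrainer
    # constructor above; verify parameter names against the V3 docs.
    trainer = DPOTrainer(
        model="meta-llama/Llama-2-7b-hf",
        training_type=TrainingType.LORA,
        model_package_group_name="my-models",
        # Assumed: a JSONL dataset of preference pairs
        # (prompt, chosen response, rejected response).
        training_dataset="s3://bucket/preferences.jsonl",
    )

    training_job = trainer.train()
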
**Key Features**

* ✨ LoRA & full fine-tuning
* 📊 MLflow integration with real-time metrics
* 🚀 Deploy to SageMaker or Bedrock (see the sketch below)
* 📈 Built-in evaluation (11 benchmarks)
* ☁️ Serverless training

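Assuming ``train()`` registers the fine-tuned model under the ``model_package_group_name`` passed to the trainer, one way to stand up an endpoint is the SDK's long-standing ``ModelPackage`` path. This is only a sketch under that assumption: the ARNs are placeholders, and V3 may provide a more direct helper for SageMaker or Bedrock deployment.

.. code:: python

    from sagemaker import ModelPackage

    # Hypothetical sketch using the classic ModelPackage deploy path;
    # both ARNs below are placeholders for your own resources.
    model = ModelPackage(
        role="arn:aws:iam::123456789012:role/SageMakerRole",
        model_package_arn=(
            "arn:aws:sagemaker:us-east-1:123456789012:"
            "model-package/my-models/1"
        ),
    )

    # Creates a real-time endpoint backed by the registered model
    model.deploy(
        initial_instance_count=1,
        instance_type="ml.g5.2xlarge",
        endpoint_name="llama2-sft-endpoint",
    )
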
**Get Started**

.. code:: bash

    pip install "sagemaker>=3.1.0"

`📓 Example notebooks <https://github.com/aws/sagemaker-python-sdk/tree/master/v3-examples/model-customization-examples>`__